Official position of the American Academy of Clinical Neuropsychology on serial neuropsychological assessments: The utility and challenges of repeat test administrations in clinical and forensic contexts

Pages 1267-1278 | Accepted 21 Sep 2010, Published online: 24 Nov 2010

Abstract

Serial assessments are now common in neuropsychological practice, and have a recognized value in numerous clinical and forensic settings. These assessments can aid in differential diagnosis, tracking neuropsychological strengths and weaknesses over time, and managing various neurologic and psychiatric conditions. This document provides a discussion of the benefits and challenges of serial neuropsychological testing in the context of clinical and forensic assessments. Recommendations regarding the use of repeated testing in neuropsychological practice are provided.

The value of serial neuropsychological evaluations is recognized in numerous clinical and forensic settings. Serial assessments are now common, and even routine, in some populations and settings, and tracking neuropsychological strengths and weaknesses over time has proven invaluable in the care of patients with developmental disorders, traumatic brain injury, acquired neuropathology, chronic illness, and progressive dementia. Serial testing is also common in forensic settings. This document is intended to provide a brief discussion of the benefits and challenges of serial neuropsychological testing in clinical and forensic contexts. Recommendations regarding the use of repeated testing in neuropsychological research and practice are provided.

Benefits of serial neuropsychological testing

Serial neuropsychological assessment has many known benefits. Most notably, comparisons of data points can document the natural progression or resolution of illness and injury (e.g., McCrea et al., 2003), which is often critical to differential diagnosis and treatment recommendations. Repeat testing can also be used to measure the clinical effectiveness of pharmacological (e.g., Loring & Meador, 2004), surgical (e.g., Blumenthal et al., 1995; Ghogawala, Westerveld, & Amin-Hanjani, 2008; Rasmussen, Siersma, & The ISPOCD Group, 2004), rehabilitation (Cicerone et al., 2000, 2005), and educational interventions (Feifer, 2008; Rabiner & Malone, 2004) in patient populations, or the effects of medications in both normal and clinical populations (e.g., Beglinger et al., 2004; Rapport, Quinn, DuPaul, Quinn, & Kelly, 1989). Although interpreting data from repeat testing can be particularly complex in populations undergoing active or dynamic cognitive change (e.g., children), serial evaluation has utility in monitoring maturational progression by detecting slowed, arrested, or uneven development or the regression of previously acquired skills (Shapiro, Lockman, Balthazor, & Krivit, 1995). Repeat testing is common in the special education setting, not only for documenting treatment efficacy but also for informing decisions about whether continued services for the child are warranted.

Finally, much as medical laboratory tests are repeated to confirm initial findings, serial neuropsychological tests have value in determining whether initial findings were definitive or accurate (resulting in a correct diagnosis/recommendation), or inaccurate due to confounding examinee, examiner, or environmental factors. In a forensic context, retesting may serve these same purposes and, in addition, can assist in determining whether the pattern of performances across time follows the expected course of recovery from an injury, as well as possibly providing insight into the credibility of the examinee's presentation.

Challenges in serial neuropsychological testing

Repeated observations and longitudinal comparisons are among the most powerful tools in science. Yet, despite its many merits, serial neuropsychological assessment gives rise to unique challenges. At a minimum, repeated testing yields two sets of scores, each of which comprises systematic (ability- and procedure-related) variance and error variance attributable to specific factors affecting the examinee, examiner, environment, and context of each individual testing session.

The primary element requiring careful consideration in serial testing is the practice effect, which refers to improvement in performance on retesting that is due to previous exposure to the same or a similar neuropsychological measure rather than to a true change in the individual's ability (cf. Kaufman, 1990). Both methodological (e.g., statistical regression to the mean) and person-specific (e.g., motivation) factors may have explanatory power in understanding the meaning of a change, or a lack of change, on repeated testing. The effects of age, education, and gender may also influence the degree of change in complex ways. It is the examiner's responsibility to judge, in a scientifically sound way, which factor(s) may have influenced performance.
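
One of the methodological factors just named, regression to the mean, is easy to demonstrate: when examinees are selected for extreme baseline scores, their retest scores drift back toward the population mean even when ability is unchanged. The short simulation below is purely illustrative (all parameter values are hypothetical placeholders) and uses only the Python standard library.

```python
# Illustrative simulation (not from the position paper): regression to the
# mean on retest. True ability is fixed; only measurement error varies, yet
# examinees selected for low baseline scores appear to "improve" at retest.
import random

random.seed(0)

TRUE_MEAN, TRUE_SD = 100.0, 15.0   # population ability (IQ-like metric)
ERROR_SD = 7.0                     # measurement error at each session

# Simulate baseline and retest scores for 10,000 examinees with no true change.
examinees = []
for _ in range(10_000):
    ability = random.gauss(TRUE_MEAN, TRUE_SD)
    baseline = ability + random.gauss(0.0, ERROR_SD)
    retest = ability + random.gauss(0.0, ERROR_SD)
    examinees.append((baseline, retest))

# Select the subgroup with extreme low baseline scores (e.g., below 80).
low_group = [(b, r) for b, r in examinees if b < 80.0]
mean_baseline = sum(b for b, _ in low_group) / len(low_group)
mean_retest = sum(r for _, r in low_group) / len(low_group)

# The retest mean drifts back toward 100 despite zero true change, which can
# masquerade as recovery or treatment response.
print(f"Low-baseline group: baseline M = {mean_baseline:.1f}, "
      f"retest M = {mean_retest:.1f}")
```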

A change in scores on repeat testing can lead to different interpretations and can result in diagnostic disagreements, such as attributing improved scores to recovery of function or efficacy of intervention when the results more likely reflect the effects of test practice. Similarly, the absence of expected practice effects may in one instance reflect insufficient effort, and in another instance may reflect the deleterious effects of an illness or injury. Neuropsychologists who conduct a re-assessment need to be mindful of, and knowledgeable about, the variables potentially affecting change, including practice effects (i.e., gains related to prior exposure to a test), psychometric factors (test reliability), patient characteristics (demographics, state variables, fatigue, motivation), and the disease state of the individual being assessed, as well as the resulting interpretive complications (Attix et al., 2009).

It is important to recognize that the sources of change in repeated assessments can be multi-factorial and usually do not relate to a single construct. Statistical methods that can aid the interpretation of change in neuropsychological test scores include reliable-change index (RCI) scores, which use the standard error of the differences between test and retest scores, and standardized regression-based (SRB) change scores. SRB approaches use regression models to predict retest scores with formulae that account for baseline performance and demographic or other variables that may impact the rate of test–retest change (Chelune, 2003). These normative SRB equations can then be used to determine whether the patient's observed rate of change deviates significantly from the change predicted by the model (see Hinton-Bayre, 2010, for a systematic explication of the calculations and implications of using different SRB models). However, for most tests currently in use, more empirical research is needed to identify relevant clinical change scores and to facilitate their application in clinical and forensic practice (cf. Maassen, Bossema, & Brand, 2009).
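
To make these two statistics concrete, the sketch below implements the textbook form of each: an RCI based on the standard error of the difference (optionally corrected for the normative practice gain) and an SRB z-score comparing the observed retest score with the score predicted by a normative regression model. It is a minimal illustration only; the reliability, practice gain, and regression weights are hypothetical placeholders, and in practice these values come from published test–retest norms for the specific instrument.

```python
# Minimal sketch of RCI and SRB change statistics. All numbers (reliability,
# practice gain, regression weights) are hypothetical placeholders.
import math

def rci(score1: float, score2: float, sd1: float, r12: float,
        mean_practice_gain: float = 0.0) -> float:
    """Reliable Change Index: the difference score (optionally corrected for
    the normative practice gain) divided by the SE of the difference."""
    sem = sd1 * math.sqrt(1.0 - r12)          # standard error of measurement
    se_diff = math.sqrt(2.0 * sem ** 2)       # SE of test-retest differences
    return ((score2 - score1) - mean_practice_gain) / se_diff

def srb_z(score2: float, predicted2: float, see: float) -> float:
    """Standardized regression-based change: observed retest score versus
    the score predicted from baseline and demographics, in SD units."""
    return (score2 - predicted2) / see

# Hypothetical example: baseline 95, retest 104 on a test with SD = 15,
# test-retest r = .85, and a normative practice gain of 3 points.
print(f"RCI = {rci(95, 104, sd1=15, r12=0.85, mean_practice_gain=3):+.2f}")

# Hypothetical SRB model: predicted T2 = 10 + 0.9*T1 - 0.1*age, SEE = 6.
t1, age = 95.0, 70.0
predicted = 10.0 + 0.9 * t1 - 0.1 * age
print(f"SRB z = {srb_z(104, predicted, see=6.0):+.2f}")  # |z| > 1.645 ~ p < .10 two-tailed
```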

Clinical context

The effects of repeated examinations have been studied in both normal (e.g., Collie, Maruff, Darby, & McStephen, 2003) and clinical samples (Dikmen, Heaton, Grant, & Temkin, 1999; Iverson, 2001; McCaffrey, Duff, & Westervelt, 2000a, 2000b; McCaffrey & Westervelt, 1995; Moore et al., 1990; Wilson, Watson, Baddeley, Emslie, & Evans, 2000), generally demonstrating increases in test scores as a result of repeated exposure. Tests requiring speed or an unfamiliar or infrequently practiced mode of response, and those with a single solution, are more likely to show significant practice effects when they are repeated (Basso, Bornstein, & Lang, 1999; McCaffrey, Ortega, Orsillo, Nelles, & Haase, 1992). Within testing domains (e.g., executive function), measures may vary considerably in the degree of practice effect (Basso et al., 1999). Practice effects are especially challenging, yet important, to consider in serial memory testing because memory measures have been found to produce particularly poor test–retest reliabilities (Dikmen et al., 1999). With the knowledge that memory measures generally have lower reliability, and that subsequent exposures to the same memory stimuli represent additional learning trials, Dikmen et al. (1999) recommend the use of multiple memory measures and reliance on composite scores rather than subtest scores. Several studies have shown that scores may be enhanced on retest even when the test items are different (Benedict & Zgaljardic, 1998; Wilson et al., 2000). This can occur even with the use of alternate forms, as the examinee benefits from knowing how to approach the task more effectively (i.e., the examinee has acquired a test-taking response set or strategy, or has become more familiar with the testing procedure). Of course, factors such as the length of the test–retest interval may affect the extent to which the follow-up score is inflated by repeated test exposure.

The number of repeat testing sessions also deserves consideration, as the degree of gain associated with practice may not be equivalent across multiple repeat administrations. With brief test intervals, gains have been reported to stabilize after an initial practice effect on measures of attention and processing speed (Falleti, Maruff, Collie, & Darby, 2006). Similarly, tests with a clear ceiling effect may show their strongest practice effects between the first and second administrations (Benedict & Zgaljardic, 1998; Ivnik et al., 1999; Rapport et al., 1997), with little to no improvement thereafter. In research settings, one can correct for an initially strong practice effect by providing two or more baseline evaluations before introducing the experimental condition (McCaffrey & Westervelt, 1995). Although some data on test–retest changes are available for a limited number of measures, the field would benefit from the development of more extensive normative change data for use in clinical and research settings that require multiple retesting sessions (Attix et al., 2009).

Large score changes across test administrations are generally uncommon in patients who have put forth good effort, except on single-solution tests and tests with a significant learning and memory component (Dikmen, Machamer, Temkin, & McLean, 1990; McCaffrey et al., 2000b). Just as baseline interpretations are strengthened when confirmed by multiple complementary findings, so too are conclusions about change in cognitive functions strengthened by the identification of change trends, especially when convergent changes are evident on multiple measures. However, it is critical to appreciate that the degree of gain expected with repeated test exposure varies across neuropsychological tests. Not surprisingly, the underlying variables impacting change vary across measures, and these may also differ from the variables impacting baseline performance (Attix et al., 2009). For instance, while age, education, and gender might affect initial performance on a measure, change on that measure might be impacted only by age. This is especially true in pediatric evaluations, where the variability in test performance can be inversely related to age (i.e., less variability with increasing age). Variability in test performance, as indexed in standard deviation units, is often greatest in very young children (before the age of 5), with a slow but steady reduction in performance variability that continues throughout childhood and into adolescence, at least on measures of functions that develop in a linearly consistent manner and that have limited floor and ceiling effects. However, at ages that involve transitions between versions of test measures (e.g., transitioning between preschool and school-age versions of intelligence tests at 6–7 years of age), variability in test scores may be small because of floor or ceiling effects of the measure at that age. In addition, as brain maturation does not necessarily involve purely linear trends for all cognitive skills, but rather may proceed in rapid developmental progressions (i.e., growth spurts), increased variability may be seen for some measures collected during normal periods of accelerated development. For example, increased temporal fluctuation in performance is common on measures of executive functions when sampled during early or pre-adolescent phases, during which these skills undergo accelerated development (P. Anderson, V. Anderson, & Garth, 2001; Huizinga, Dolan, & van der Molen, 2006). Such effects, coupled with practice gains, can result in different change trajectories across measures.

Additional considerations are relevant to the repeated evaluation of infants and preschool children. Children at these ages often display reduced control over their attention and behavior, and are more heavily influenced by their external environment than older children (Lopez, Menez, & Hernandez-Guzman, 2005; Mahone, 2005). Familiarity with test stimuli can negatively influence performance: as the stimuli come to be viewed as less novel and less stimulating, the young child's interest and attention decline (Courage, Reynolds, & Richards, 2006; Rothbart & Posner, 2001; Sheese, Rothbart, Posner, White, & Fraundorf, 2008).

Finally, in addition to considering whether normative change data are available, one must also consider the cognitive process being measured and how it may change with repeated assessments. It is doubtful, for example, that a repeated administration of a straightforward problem-solving task measures the same cognitive process as the initial administration. Indeed, effortful information processing can evolve into perceptual speed processing with practice. The effects of repeated exposure on the cognitive process under study (not just the test) therefore need to be considered when interpreting the meaning of the second test score.

In the case of documented severe brain damage, an individual is less likely to improve with practice alone, and improvements may be minimal. The degree to which brain injury impacts change may vary with the nature, site, severity, and chronicity of the brain damage in combination with the patient's age, although the influence of these factors has yet to be elucidated. Measuring change after pediatric acquired insult can be particularly challenging, as the child is attempting not only to recover previously developed skills but also to acquire new age-expected skills. Practice effects that are smaller than expected can reflect deterioration in skills, failure to develop at age-appropriate rates, or the inability to manage age-related increases in complexity on specific tests (Taylor & Alden, 1997). Moreover, the characteristics of the test can also determine whether performance of patients with brain damage will improve with repetition. For example, Wilson et al. (2000) reported that reaction time shows little practice effect, even over as many as 20 assessments. For these reasons, clinicians must carefully consider how change trajectories might be influenced by the illness or injury under study. Both the nature of the patient's cognitive deficits and the nature of the test can impact change, as might their interaction. Ideally, studies would be available using repeated testing of both normal individuals and clinical samples. Such studies might highlight the tests that are most vulnerable to change with repeated administration and the patient groups that tend to be most susceptible to such change (McCaffrey et al., 2000a, 2000b).

Repeated testing can shed light on the validity of responses (discussed in greater detail in the forensic context section below). When two assessments are made using the same tests, poor test-taking effort may be reflected in the performance, such as the absence of gains for examinees who have demonstrated some learning ability. Similarly, variable effort across tests may result in implausible increases or decreases in scores, very different item responses, or wide variations in intratest response patterns (cf. Lezak, Howieson, & Loring, 2004). Examination of performance reliability can be an important means by which the neuropsychologist can identify patients who are not consistently putting forth adequate effort (Cullum, Heaton, & Grant, 1991). Several tests and test batteries have established test–retest indices that effectively differentiate malingerers from control participants (Dikmen et al., 1999; Strauss et al., 1999). Performance consistency has also been incorporated into some of the specialized validity measures used during a single assessment (Green, Allen, & Astner, 1996).

In clinical research, RCI and SRB approaches are particularly well suited to longitudinal studies examining the impact of intervention or natural disease course, and offer a more sophisticated consideration of change than simple subtraction approaches. Such methods have been applied to outcomes research on cardiac interventions (e.g., Collie, Darby, Falleti, Silbert, & Maruff, 2002), epilepsy surgery (e.g., Chelune, Naugle, Luders, Sedlak, & Awad, 1993), post-operative cognitive dysfunction (Farag, Chelune, Schubert, & Mascha, 2006), and aging (Duff et al., 2005). In addition to direct comparisons of scores indicating deviation from expected change across conditions, these methods can be used to classify outcomes (e.g., cases demonstrating change 1 SD below expected change are considered a poor outcome, while those above are not) or to construct indices that serve as predictors of other outcomes.
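
As a hedged sketch of the outcome-classification use just described, the snippet below dichotomizes SRB z-scores at 1 SD below expected change; the cutoff and the cohort values are illustrative, not drawn from the cited studies.

```python
# Illustrative outcome classification from SRB z-scores: a case is labeled a
# poor outcome if its change falls more than 1 SD below the change predicted
# by the normative model. Threshold and data are hypothetical.
def classify_outcomes(srb_z_scores: list[float],
                      cutoff: float = -1.0) -> list[str]:
    """Label each case by comparing its SRB z-score against the cutoff."""
    return ["poor outcome" if z < cutoff else "within expectation"
            for z in srb_z_scores]

# Hypothetical post-surgical cohort of five patients.
zs = [-1.8, -0.4, 0.2, -1.1, 0.9]
for z, label in zip(zs, classify_outcomes(zs)):
    print(f"SRB z = {z:+.1f} -> {label}")
```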

Forensic context

In addition to the usual factors encountered in clinical evaluations, serial assessments in forensic contexts give rise to a somewhat different set of challenges and benefits. The issue of test practice effects often arises in litigation, especially in personal injury and disability evaluations. For example, a neuropsychologist retained by a defense attorney in civil litigation often wants the opportunity to examine the plaintiff, who has already been evaluated, either in the context of clinical care or by the plaintiff's neuropsychological expert. The issues described above complicate test interpretation regardless of whether the available test data come from a treating psychologist or from an expert hired by a plaintiff attorney. A plaintiff attorney may object to a defense expert's examination on the basis that the plaintiff has "already been tested" and no further testing is warranted. Greiffenstein (2009) has reported that some neuropsychologists inappropriately propose a minimum test–retest interval before someone can be examined a second time, seemingly to justify a treating neuropsychologist's argument that practice effects may "mask" or "hide" the underlying neurocognitive deficits they believe to be present. However, such specific suggestions should not be regarded as professional guidelines: the interval after which practice effects may be minimized varies across measures, and there is no consensually agreed-upon minimum test–retest interval between examinations. Frequent serial testing with numerous types of clinical patients, together with the available relevant literature, provides the same means of differentiating real change from practice-related change whether the individual is evaluated in a forensic context or in a routine clinical context. There simply are no prohibitions on serial testing that are unique to forensic contexts. When such prohibitions are alleged, neuropsychologists can rely on relevant statements of professional organizations to support the position that retesting can be appropriate, necessary, and informative. For example, documenting a patient's performance over repeated evaluations is recommended in the American Academy of Clinical Neuropsychology (AACN) Practice Guidelines for Neuropsychological Assessment and Consultation (AACN, 2007).

Determining causation of deficits is often a focus of forensic neuropsychological assessments, and serial testing may improve the prospects of doing so through its ability to document the temporal course of cognitive strengths and weaknesses. Indeed, comparison of test results from different points in time is one means of assessing the potential role of numerous factors, other than brain damage or dysfunction, that can affect neuropsychological test scores. For example, a change in test scores may correspond to progression of disease or recovery of function in an acute or subacute disorder, or to factors associated with interim changes in pain, sleep efficiency, life stressors, medication side effects, or effective treatment of related or unrelated medical conditions.

There are several considerations pertaining to a defense expert examining a plaintiff. The first is when the plaintiff was first examined: as time passes following an initial evaluation, a second evaluation may be necessary to document recovery of function, address the permanence of symptoms, or track progression of medical conditions. A second consideration is the purpose of the prior examination. If the initial evaluation was carried out by a treating psychologist, there are often questions related to the cause(s) of the impairment that have not been addressed in the same manner as they would be in a forensic context. For example, the first examiner may not have had access to the same background information (e.g., academic history, details about medical conditions possibly causing neuropsychological impairments); a retained expert usually requests, and has support in acquiring, difficult-to-obtain information to assist in a determination of causality and prognosis. A third consideration is whether the test battery from the previous examination was sufficient to address specific issues that are known to arise more frequently in forensic practice than in clinical assessment contexts. For example, the absence of symptom validity testing in the initial evaluation would preclude the thoroughness, and the appropriate consideration of the possible negative influences of effort and response bias, expected within a forensic context (Heilbronner et al., 2009).

As Greiffenstein (2009) aptly notes, no one disputes that practice effects pose an interpretive challenge, but recognition of this challenge does not justify the belief that retesting is prohibited within specific time frames. In reality, the scientific literature does not support such prohibitions. Moreover, examination of the effects of repeated test administration can provide useful information in clinical, forensic, and research settings. Whether present or absent, these effects provide another source of data to be incorporated into diagnostic formulations, relying on relevant literature, when available, that addresses change trajectories.

Recommendations

  1. Neuropsychologists who perform a repeat examination should address the potential influence of test practice effects, and may choose to report specifically how these effects were estimated or otherwise taken into consideration in interpreting the data and reaching conclusions. Rather than viewing repeat testing primarily as a confound, sound neuropsychological practice is best served when neuropsychologists treat change as a measurable construct that informs the clinical description and diagnostic process. Consideration may be given to the standard error of measurement from a test manual, empirical findings on the expected magnitudes of score increases over a particular interval, or other relevant research on the operating characteristics of the instruments in the neuropsychologist's battery (an illustrative computation follows this list).

  2. Neuropsychologists should remain up to date on research addressing test–retest effects, when available and relevant. Empirical data can clarify which tests are most vulnerable to practice effects, as well as which clinical samples tend to be more or less likely to display such effects.

  3. Neuropsychologists should be familiar with the concepts and statistical methods underlying reliable change indices (RCIs) and regression-based approaches to change measurement, and with their associated risks and benefits, when attempting to account for practice effects.

  4. There is an obvious need for more data on normal change trajectories for all types of measures across demographic variables and patient groups. This is especially salient in pediatric neuropsychological assessments, where cognitive development, practice, and recovery can become confounded in repeated assessments. Because of the inherent neurobiological and contextual change that is particularly relevant to both early development and aging, repeat assessment is often essential in identifying atypical maturational progression and in planning and evaluating treatment across developmental stages of the lifespan.

  5. In a forensic context, neuropsychologists acting as experts should be aware of the same factors affecting change that operate in clinical settings. Additional data and extra-test information (e.g., effort indicators) should be considered in forensic settings to assist in the interpretation of test–retest findings.

  6. There are no empirical data allowing the development of clinical guidelines regarding minimum test–retest intervals in clinical or forensic settings. In a forensic context, if confronted by an opposing expert who advocates for fixed retesting intervals, the neuropsychologist should be prepared to educate the court (or referral source) on the state of the art and science. Measurement of practice effects represents valuable data bearing on a person's capacity for learning and adaptation. Making clinical sense out of practice effects requires interpretation just as much as any single score. Neuropsychologists are qualified to interpret the significance of test–retest differences, and are especially equipped by virtue of their knowledge of test operating characteristics to understand the many variables contributing to test–retest change over time.
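
As an illustration of the psychometric considerations in Recommendation 1, the sketch below derives the standard error of measurement from a manual-reported reliability and computes an approximate band within which a retest score could fall from measurement error alone. All values are hypothetical; formally comparing two obtained scores would instead use the standard error of the difference, as in the RCI sketch earlier.

```python
# Illustrative only: deriving the standard error of measurement (SEM) from a
# hypothetical manual-reported reliability, and an approximate 90% band
# around an obtained score attributable to measurement error alone.
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - r_xx)."""
    return sd * math.sqrt(1.0 - reliability)

def error_band(score: float, sd: float, reliability: float,
               z: float = 1.645) -> tuple[float, float]:
    """Approximate 90% band (z = 1.645) around an obtained score. Comparing
    two obtained scores would instead use the SE of the difference,
    sqrt(2) * SEM (see the RCI sketch earlier)."""
    half_width = z * sem(sd, reliability)
    return score - half_width, score + half_width

# Hypothetical index score of 92 on a scale with SD = 15 and r_xx = .90.
lo, hi = error_band(score=92.0, sd=15.0, reliability=0.90)
print(f"Scores within [{lo:.1f}, {hi:.1f}] may reflect measurement error "
      "rather than true change.")
```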

Acknowledgment

The authors gratefully acknowledge the assistance of Kenneth M. Adams, Manfred F. Greiffenstein, Glenn J. Larrabee, Dean Beebe, and Michael Kirkwood for their helpful comments on previous versions of this manuscript.

References

  • American Academy of Clinical Neuropsychology. (2007). AACN practice guidelines for neuropsychological assessment and consultation. The Clinical Neuropsychologist, 21, 209–231.
  • Anderson, P., Anderson, V., & Garth, J. (2001). Assessment and development of organizational ability: The Rey Complex Figure Organizational Strategy Score (RCF-OSS). The Clinical Neuropsychologist, 15, 81–94.
  • Attix, D. K., Story, T. J., Chelune, G. J., Ball, J. D., Stutts, M. L., … Hart, R. P. (2009). The prediction of change: Normative neuropsychological trajectories. The Clinical Neuropsychologist, 23, 21–38.
  • Basso, M., Bornstein, R., & Lang, J. (1999). Practice effects on commonly used measures of executive function across twelve months. The Clinical Neuropsychologist, 13, 283–292.
  • Beglinger, L., Gaydos, B., Kareken, D., Tangphao-Daniels, O., Siemers, E., & Mohs, R. (2004). Neuropsychological test performance in healthy volunteers before and after donepezil administration. Journal of Psychopharmacology, 18, 102–108.
  • Benedict, R. H. B., & Zgaljardic, D. J. (1998). Practice effects during repeated administrations of memory tests with and without alternate forms. Journal of Clinical and Experimental Neuropsychology, 20, 339–352.
  • Blumenthal, J., Mahanna, E., Madden, D., White, W., Croughwell, N., & Newman, M. (1995). Methodological issues in the assessment of neuropsychologic function after cardiac surgery. Annals of Thoracic Surgery, 59, 1345–1350.
  • Chelune, G. J. (2003). Assessing reliable neuropsychological change. In R. Franklin (Ed.), Prediction in forensic and neuropsychology: New approaches to psychometrically sound assessment. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Chelune, G. J., Naugle, R. I., Luders, H., Sedlak, J., & Awad, I. A. (1993). Individual change after epilepsy surgery: Practice effects and base-rate information. Neuropsychology, 7, 41–52.
  • Cicerone, K. D., Dahlberg, C., Kalmar, K., Langenbahn, D. M., Malec, J. F., … Bergquist, T. F. (2000). Evidence-based cognitive rehabilitation: Recommendations for clinical practice. Archives of Physical Medicine & Rehabilitation, 81, 1596–1615.
  • Cicerone, K. D., Dahlberg, C., Malec, J. F., Langenbahn, D. M., Felicetti, T., … Kneipp, S. (2005). Evidence-based cognitive rehabilitation: Updated review of the literature from 1998 through 2002. Archives of Physical Medicine & Rehabilitation, 86, 1681–1692.
  • Collie, A., Darby, D. G., Falleti, M. G., Silbert, B. S., & Maruff, P. (2002). Determining the extent of cognitive change after coronary surgery: A review of statistical procedures. Annals of Thoracic Surgery, 73, 2005–2011.
  • Collie, A., Maruff, P., Darby, D., & McStephen, M. (2003). The effects of practice on the cognitive test performance of neurologically normal individuals assessed at brief test–retest intervals. Journal of the International Neuropsychological Society, 9, 419–428.
  • Courage, M. L., Reynolds, G. D., & Richards, J. E. (2006). Infants' attention to patterned stimuli: Developmental change from 3 to 12 months of age. Child Development, 77, 680–695.
  • Cullum, C. M., Heaton, R. K., & Grant, I. (1991). Psychogenic factors influencing neuropsychological performance: Somatoform disorders, factitious disorders, and malingering. In H. O. Doerr & A. S. Carlin (Eds.), Forensic neuropsychology: Legal and scientific bases. New York: Guilford Press.
  • Dikmen, S. S., Heaton, R., Grant, I., & Temkin, N. (1999). Test–retest reliability and practice effects of Expanded Halstead–Reitan Neuropsychological Test Battery. Journal of the International Neuropsychological Society, 5, 346–356.
  • Dikmen, S. S., Machamer, J., Temkin, N., & McLean, A. (1990). Neuropsychological recovery in patients with moderate to severe head injury: Two-year follow-up. Journal of Clinical and Experimental Neuropsychology, 12, 507–519.
  • Duff, K., Schoenberg, M. R., Patton, D., Paulsen, J. S., Bayless, J. D., … Mold, J. (2005). Regression-based formulas for predicting change in RBANS subtests with older adults. Archives of Clinical Neuropsychology, 20, 281–290.
  • Falleti, M. G., Maruff, P., Collie, A., & Darby, D. G. (2006). Practice effects associated with the repeated assessment of cognitive function using the CogState battery at 10-minute, one week and one month test–retest intervals. Journal of Clinical and Experimental Neuropsychology, 28, 1095–1112.
  • Farag, E., Chelune, G. J., Schubert, A., & Mascha, E. J. (2006). Is depth of anesthesia, as assessed by the Bispectral Index, related to postoperative cognitive dysfunction and recovery? Anesthesia & Analgesia, 103(3), 633–640.
  • Feifer, S. G. (2008). Integrating response to intervention (RTI) with neuropsychology: A scientific approach to reading. Psychology in the Schools, 45(9), 812–825.
  • Ghogawala, Z., Westerveld, M., & Amin-Hanjani, S. (2008). Cognitive outcomes after carotid revascularization: The role of cerebral emboli and hypoperfusion. Neurosurgery, 62, 385–395.
  • Green, P., Allen, L. M., & Astner, K. (1996). The Word Memory Test: A user's guide to the oral and computer-administered forms. Durham, NC: CogniStat.
  • Greiffenstein, M. F. (2009). Clinical myths of forensic neuropsychology. The Clinical Neuropsychologist, 23, 286–296.
  • Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., Millis, S. R., & Conference Participants. (2009). American Academy of Clinical Neuropsychology consensus conference statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23, 1093–1129.
  • Hinton-Bayre, A. D. (2010). Deriving reliable change statistics from test–retest normative data: Comparison of models and mathematical expressions. Archives of Clinical Neuropsychology, 25, 244–256.
  • Huizinga, M., Dolan, C. V., & van der Molen, M. W. (2006). Age-related change in executive function: Developmental trends and a latent variable analysis. Neuropsychologia, 44(11), 2017–2036.
  • Iverson, G. (2001). Interpreting change on the WAIS-III/WMS-III in clinical samples. Archives of Clinical Neuropsychology, 16, 183–191.
  • Ivnik, R. J., Smith, G. E., Lucas, J. A., Petersen, R. C., Boeve, B. F., … Kokmen, E. (1999). Testing normal older adult people three or four times at 1- to 2-year intervals: Defining normal variance. Neuropsychology, 13, 121–127.
  • Kaufman, A. S. (1990). Assessing adolescent and adult intelligence. Boston: Allyn & Bacon.
  • Lezak, M. D., Howieson, D. B., & Loring, D. W. (2004). Neuropsychological assessment (4th ed.). New York: Oxford University Press.
  • Lopez, F., Menez, M., & Hernandez-Guzman, L. (2005). Sustained attention during learning activities: An observational study with preschool children. Early Child Development and Care, 175(2), 131–138.
  • Loring, D., & Meador, K. (2004). Cognitive side effects of antiepileptic drugs in children. Neurology, 62, 872–877.
  • Maassen, G., Bossema, E., & Brand, N. (2009). Reliable change and practice effects: Outcomes of various indices compared. Journal of Clinical and Experimental Neuropsychology, 31, 339–352.
  • Mahone, E. M. (2005). Measurement of attention and related functions in the preschool child. Mental Retardation and Developmental Disabilities Research Reviews, 11, 216–225.
  • McCaffrey, R. J., Duff, K., & Westervelt, H. J. (2000a). Practitioner's guide to evaluating change with intellectual assessment instruments. New York: Kluwer Academic/Plenum Press.
  • McCaffrey, R. J., Duff, K., & Westervelt, H. J. (2000b). Practitioner's guide to evaluating change with neuropsychological assessment instruments. New York: Kluwer Academic/Plenum Press.
  • McCaffrey, R. J., Ortega, A., Orsillo, S. M., Nelles, W. B., & Haase, R. F. (1992). Practice effects in repeated neuropsychological assessments. The Clinical Neuropsychologist, 6, 32–42.
  • McCaffrey, R. J., & Westervelt, H. J. (1995). Issues associated with repeated neuropsychological assessments. Neuropsychology Review, 5, 203–221.
  • McCrea, M., Guskiewicz, K., Marshall, S., Barr, W., Randolph, C., … Cantu, R. C. (2003). Acute effects and recovery time following concussion in collegiate football players. Journal of the American Medical Association, 290, 2556–2563.
  • Moore, A., Stambrook, M., Hawryluk, G., Peters, L., Gill, D., & Hymans, M. (1990). Test–retest stability of the Wechsler Adult Intelligence Scale–Revised in the assessment of head-injury patients. Psychological Assessment: A Journal of Consulting and Clinical Psychology, 2, 98–100.
  • Rabiner, D. L., & Malone, P. S. (2004). The impact of tutoring on early reading achievement for children with and without attention problems. Journal of Abnormal Child Psychology, 32(3), 273–284.
  • Rapport, L. J., Axelrod, B. N., Theisen, M. E., Brines, D. B., Kalechstein, A. D., & Ricker, J. H. (1997). Relationship of IQ to verbal learning and memory: Test and retest. Journal of Clinical and Experimental Neuropsychology, 19, 655–666.
  • Rapport, M. D., Quinn, S. O., DuPaul, G. J., Quinn, E. P., & Kelly, K. L. (1989). Attention deficit disorder with hyperactivity and methylphenidate: The effects of dose and mastery level on children's learning performance. Journal of Abnormal Child Psychology, 17, 669–689.
  • Rasmussen, L., Siersma, V., & The ISPOCD Group. (2004). Postoperative cognitive dysfunction: True deterioration versus random variation. Acta Anaesthesiologica Scandinavica, 48, 346–256.
  • Rothbart, M. K., & Posner, M. I. (2001). Mechanism and variation in the development of attentional networks. In C. A. Nelson & M. Luciana (Eds.), Handbook of developmental cognitive neuroscience (pp. 353–364). Cambridge, MA: The MIT Press.
  • Shapiro, E., Lockman, L., Balthazor, M., & Krivit, W. (1995). Neuropsychological outcomes of several storage diseases with and without bone marrow transplantation. Journal of Inherited Metabolic Disease, 18, 413–429.
  • Sheese, B. E., Rothbart, M. K., Posner, M. I., White, L. K., & Fraundorf, S. H. (2008). Executive attention and self-regulation in infancy. Infant Behavior & Development, 31(3), 501–510.
  • Strauss, E., Hultsch, D. F., Hunter, M., Slick, D. J., Patry, B., & Levy-Bencheton, J. (1999). Using intraindividual variability to detect malingering in cognitive performance. The Clinical Neuropsychologist, 13, 420–432.
  • Taylor, H. G., & Alden, J. (1997). Age-related differences in outcome following childhood brain injury: An introduction and overview. Journal of the International Neuropsychological Society, 3, 555–567.
  • Wilson, B. A., Watson, P. C., Baddeley, A. D., Emslie, H., & Evans, J. J. (2000). Improvement or simply practice? The effects of twenty repeated assessments of people with and without brain injury. Journal of the International Neuropsychological Society, 6, 469–479.
