From the American Academy of Clinical Neuropsychology (AACN) and the National Academy of Neuropsychology (NAN)

Computerized Neuropsychological Assessment Devices: Joint Position Paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology

Pages 177-196 | Accepted 30 Jan 2012, Published online: 07 Mar 2012

Abstract

This joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology sets forth our position on appropriate standards and conventions for computerized neuropsychological assessment devices (CNADs). In this paper, we first define CNADs and distinguish them from examiner-administered neuropsychological instruments. We then set forth position statements on eight key issues relevant to the development and use of CNADs in the healthcare setting. These statements address (a) device marketing and performance claims made by developers of CNADs; (b) issues involved in appropriate end-users for administration and interpretation of CNADs; (c) technical (hardware/software/firmware) issues; (d) privacy, data security, identity verification, and testing environment; (e) psychometric development issues, especially reliability and validity; (f) cultural, experiential, and disability factors affecting examinee interaction with CNADs; (g) use of computerized testing and reporting services; and (h) the need for checks on response validity and effort in the CNAD environment. This paper is intended to provide guidance for test developers and users of CNADs that will promote accurate and appropriate use of computerized tests in a way that maximizes clinical utility and minimizes risks of misuse. The positions taken in this paper are put forth with an eye toward balancing the need to make validated CNADs accessible to otherwise underserved patients with the need to ensure that such tests are developed and utilized competently, appropriately, and with due concern for patient welfare and quality of care.

Introduction

The use of computerized neuropsychological assessment devices (CNADs) is receiving increasing attention in clinical practice, research, and clinical trials. Computerized testing has several potential advantages, including: (a) the capacity to test a large number of individuals quickly; (b) ready availability of assessment services without advance notice; (c) the ability to measure performance on time-sensitive tasks, such as reaction time, more precisely; (d) potentially reduced assessment times through the use of adaptive testing protocols; (e) reduced costs relating to test administration and scoring; (f) ease of administering measures in different languages; (g) automated data exporting for research purposes; (h) increased accessibility to patients in areas or settings in which professional neuropsychological services are scarce; and (i) the ability to integrate and automate interpretive algorithms, such as decision rules for determining impairment or statistically reliable change.

CNADs range from stand-alone computer-administered versions of established examiner-administered tests (e.g., Wisconsin Card Sorting Test) to fully web-integrated testing stations designed for general (e.g., cognitive screening) or specific (e.g., concussion evaluation and management) applications. This has been a highly active research and development area, and new tests and findings are being released continuously (Crook, Kay, & Larrabee, 2009). Researchers have used computerized neuropsychological testing with numerous clinical groups across the lifespan. Examples include children with attention-deficit hyperactivity disorder (Bolfer et al., 2010; Chamberlain et al., 2011; Gualtieri & Johnson, 2006; Polderman, van Dongen, & Boomsma, 2011) or depression (Brooks, Iverson, Sherman, & Roberge, 2010); adults with psychiatric illnesses, such as depression or bipolar disorder (Iverson, Brooks, Langenecker, & Young, 2011; Sweeney, Kmiec, & Kupfer, 2000); and adolescents and young adults who sustain sport-related concussions (Bleiberg, Garmoe, Halpern, Reeves, & Nadler, 1997; Bleiberg et al., 2004; Broglio, Ferrara, Macciocchi, Baumgartner, & Elliott, 2007; Cernich, Reeves, Sun, & Bleiberg, 2007; Collie, Makdissi, Maruff, Bennell, & McCrory, 2006; Collins, Lovell, Iverson, Ide, & Maroon, 2006; Gualtieri & Johnson, 2008; Iverson, Brooks, Collins, & Lovell, 2006; Iverson, Brooks, Lovell, & Collins, 2006; Peterson, Stull, Collins, & Wang, 2009; Van Kampen, Lovell, Pardini, Collins, & Fu, 2006). CNADs have also been applied to adult epilepsy (Moore, McAuley, Long, & Bornstein, 2002), cardiovascular surgery (Raymond, Hinton-Bayre, Radel, Ray, & Marsh, 2006), neurocognitive problems encountered by active duty military service members and veterans (Anger et al., 1999; Marx et al., 2009; McLay, Spira, & Reeves, 2010; Retzlaff, Callister, & King, 1999; Vasterling et al., 2006), and mild cognitive impairment in older adults (Doniger et al., 2006; Dwolatzky et al., 2004; Gualtieri & Johnson, 2005; Tornatore, Hill, Laboff, & McGann, 2005; Wild, Howieson, Webbe, Seelye, & Kaye, 2008) or dementia (Doniger et al., 2005; Dorion et al., 2002; Wouters, de Koning et al., 2009; Wouters, Zwinderman, van Gool, Schmand, & Lindeboom, 2009). Computerized tests, sometimes administered as part of a predominantly examiner-administered battery, are also used to identify poor effort within the context of a comprehensive neuropsychological evaluation (Green, Rohling, Lees-Haley, & Allen, 2001; Slick et al., 2003). The potential application of CNADs to other medical and neuropsychiatric conditions seems limited only by available knowledge and recognition of neurocognitive symptoms seen in these disorders. For this reason, clinical application of CNADs is expected to increase in the coming years.

Computerized neuropsychological assessment is currently being used in many mainstream applications to which examiner-administered neuropsychological assessment has historically been applied. This paper describes the position of the American Academy of Clinical Neuropsychology (AACN) and the National Academy of Neuropsychology (NAN) with regard to key issues in the development, dissemination, and implementation of computerized neuropsychological tests in clinical practice.

Nature and Definition of Computerized Neuropsychological Assessment Devices

We define a “computerized neuropsychological assessment device” as any instrument that utilizes a computer, digital tablet, handheld device, or other digital interface instead of a human examiner to administer, score, or interpret tests of brain function and related factors relevant to questions of neurologic health and illness. Although it is tempting to consider CNADs as directly comparable to examiner-administered tests, there are significant differences between the two approaches. First, it is important to recognize that even when a traditional examiner-administered test is programmed for computer administration, it becomes a new and different test. One obvious difference is in the patient interface. In examiner-centered approaches, the patient interacts with a person who presents stimuli, records verbal, motor, or written responses, and makes note of key behavioral observations. For a CNAD, examinees interact with a computer or tablet testing station through one or more alternative input devices (e.g., keyboard, voice, mouse, or touch screen), in some cases without supervision or observation by a test administrator. Also, some CNADs utilize an “adaptive” assessment approach derived from Item Response Theory (Reise & Waller, 2009; Thomas, 2010), wherein the program adjusts task difficulty or stimulus presentation as a function of the examinee's success or failure on previous items. This does not typically occur in examiner-centered approaches.
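To make the adaptive branching concrete, the following minimal sketch implements item selection under a one-parameter (Rasch) IRT model: after each response, the ability estimate is nudged and the next item presented is the unused one whose difficulty best matches it. The item bank, fixed-step update, and test length are illustrative assumptions for exposition, not the algorithm of any particular CNAD; operational systems typically use maximum-likelihood or Bayesian ability estimation.

```python
# Illustrative sketch of IRT-based adaptive item selection (Rasch model).
# All parameters (item bank, step size, test length) are assumptions.
import math
import random

def p_correct(theta, b):
    """Rasch model: probability of a correct response for ability theta
    on an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def run_adaptive_test(item_bank, respond, n_items=10):
    """Administer n_items adaptively: update the ability estimate after each
    response and present the unused item closest to it in difficulty."""
    theta = 0.0                       # start at the population average
    unused = list(item_bank)
    for step in range(1, n_items + 1):
        b = min(unused, key=lambda d: abs(d - theta))   # best-matched item
        unused.remove(b)
        correct = respond(b)          # examinee's response to this item
        theta += (0.5 if correct else -0.5) / step      # diminishing steps
    return theta

# Simulate an examinee whose true ability is 1.0
bank = [x / 4.0 for x in range(-8, 9)]                  # difficulties -2.0..2.0
estimate = run_adaptive_test(bank, lambda b: random.random() < p_correct(1.0, b))
print(f"estimated ability: {estimate:.2f}")
```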

Second, whereas most examiner-administered tests require documentation of test user qualifications on purchase, some CNADs are advertised and marketed to end-users with no expertise in neuropsychological assessment or knowledge of psychometric principles. Many contain proprietary algorithms for calculating summary scores or indices from performance data, and some provide the end-user with boilerplate report language derived from the examinee's performance that is intended as an automated form of interpretation based solely on test metrics. Third, the responsible interpretation and reporting of results of CNADs requires an understanding of test utility and accuracy when installed and used in the local clinical setting, which in turn requires familiarity with many technical details regarding their psychometric properties and normative standards. How the installed program interacts with the user's unique software and hardware configuration may affect important parameters including timing accuracy, screen resolution or refresh rate, or the sensitivity of input devices. These and other issues discussed in this paper lead to the conclusion that CNADs are qualitatively and technically different from examiner-administered instruments and require best practices for their competent and safe use.

Key Issues and Position Statements on CNAD

(1) Device Marketing and Performance Claims

Position Statement

It is our position that CNADs (see Note 1) are subject to, and should meet, the same standards for the development and use of educational, psychological, and neuropsychological tests (American Psychological Association, 1999) as are applied to examiner-administered tests. This position echoes similar statements made over 20 years ago by Kramer (1987) and Matarazzo (1985, 1986). In addition, CNADs likely qualify as, and thus will eventually be regulated as, “medical devices” according to prevailing definitions in Federal law. As regulated devices, developers of computerized neuropsychological assessment tools will likely need to provide additional documentation that meets specific labeling standards particular to medical device regulation.

Discussion

Section 201(h) of the Federal Food, Drug & Cosmetic Act (FD&C; 21 U.S.C. 301) defines a medical device as “an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory which is…intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals …” This definition would appear to include CNADs. As the field of computerized assessment evolves, it is reasonable to assume that such tools will come under the regulatory authority of the Food and Drug Administration (FDA), the agency that, under federal law, regulates all drugs and medical devices.

When considering the use of a CNAD in a particular setting, end users need to know the answers to questions such as: (a) What does the device claim to do? (b) What does it actually do? (c) How does it do what it claims to do? and (d) Is it safe and effective? Such answers are critical in making informed decisions about the reliability, validity, and clinical utility of CNADs in any particular setting. The claims made by the developer are critical to the ability of the end user to evaluate the product. Some products may claim to be stand-alone diagnostic tests for specific conditions, whereas others may claim increased technical accuracy over examiner-administered neuropsychological assessment. The marketing claims and the claims made in the documentation for the measure should be evaluated in light of the data presented and the technical information included in the manual, similar to the evaluation of examiner-administered neuropsychological measures. In addition, understanding the mechanism by which the measure provides diagnostic information or normative comparisons is critical for users of CNADs. Even if the algorithm is proprietary, the methodology must be understandable and transparent so that the user can evaluate the validity of the claims made.

Because some CNADs are intended for end-users who have no neuropsychological expertise, claims may be difficult for the potential user to evaluate. When users consider installing a CNAD in their practice or research program, key information about the device, its intended application, and its expected performance is needed. In this regard, it is important to distinguish among (a) “marketing” (the healthcare professional to whom the device is targeted), (b) “labeling” (the information about the device that is provided on packaging or accompanying inserts), (c) “use” (the intended application for the device), and (d) “documentation” (information that accompanies the device, including installation instructions, normative data, or information about device utility). Readers who consult device websites will discover broad variation in the extent to which such information is provided by developers of commercially available CNADs.

Regarding safety and efficacy, developers of CNADs should be explicit about how the test should be used in deriving diagnostic or prognostic statements. Some CNADs provide simple, easy-to-use reports that represent performance levels in color-coded fashion, with “red” indicating a cause for concern and “green” suggesting normal results, similar in principle to the results of a metabolic panel. Though this simplifies the use of the device, it can obscure the need to consider other data (e.g., additional laboratory, neuroimaging, or clinical data) needed to establish a complex diagnosis, and such reports offer little guidance regarding differential diagnostic considerations in the individual case. Test developers should provide sufficient data to allow users to determine whether the CNAD has been previously applied to the problem/condition considered by the user, so that users can determine the suitability of the CNAD for their settings and needs. At the same time, users are expected to implement CNADs in a responsible and appropriate manner. For example, it would not be appropriate to implement a CNAD developed for concussion assessment and management as a dementia screening device in clinical practice in the absence of empirical data supporting its efficacy for that use.

(2) End-User Issues

Position Statement

Developers of CNADs are expected to provide a clear definition of the intended end-user population, including a description of the competencies and skills necessary for effective and accurate use of the device and the data it provides.

Discussion

Some devices are specifically intended as self-test instruments (with the examinee as the end-user), whereas others require test user qualifications similar to those imposed on examiner-administered neuropsychological tests. Still other CNADs are intended for use by health care providers who possess varying knowledge of psychometric principles and/or neuropsychological expertise. Although test administration is likely to be less affected by this lack of knowledge if appropriate orientation to, and training on, the specific test is undertaken, interpretation of the data generated by the measure may be more substantially affected. Depending on the intended use or application of the test, limited knowledge of the measure's psychometric properties, test behavior, and the medical or behavioral data needed to support interpretation, together with limited neuropsychological expertise, may present a specific challenge to the general health care provider and create a risk to the patient with whom the test is used.

CNADs can be appropriately administered by paraprofessionals or technically trained staff who may lack the education, training, or experience necessary to integrate or interpret test results. However, unlike examiner-administered testing, many CNADs are intended to support clinical interpretations rendered by practitioners who have little or no expertise, training, or experience in psychometrics or clinical neuropsychology. The safe and competent use of CNADs requires a link to professionals trained in the use of psychometric techniques in the differential diagnostic setting. The appropriate process of test interpretation involves an integration of quantitative test findings with information from medical records, including disease course, functional impairment, comorbid illnesses, history, and other relevant factors. Also, an understanding that multiple factors separate from central nervous system disease or injury (e.g., premorbid abilities, general health, neuropsychiatric and emotional status, medications, fatigue, and effort) can affect performance on cognitive tests is critical to accurate interpretation of test results. Bypassing careful clinical interpretation may lead to potential misuse of the data or failure to consider potential clinical or methodological issues that could influence the results (American Psychological Association, 1986). This issue is also relevant to the application of CNADs in settings in which a professional is not available to make behavioral observations of the examinee during the testing session. Indeed, some CNADs are designed to be administered to a patient or client with minimal or no direct observation by a trained examiner, and some CNADs that do involve observation by the examiner are intended for use by professionals or paraprofessionals with limited education, training, or experience in neuropsychological assessment. In these instances, behavioral indicators of emotional, motivational, or mental status issues that might complicate test interpretation may be inadvertently missed.

Test developers have taken one of two broad approaches to the user qualifications issue. Some CNADs require appropriate licensure or certification in relevant healthcare fields (e.g., psychology, medicine), thus making the device available to an “expert user”. Another approach has been to develop an interpretive algorithm within the device software that essentially creates an “expert system”. In this approach, the program itself contains clinical actuarial routines that generate clinical findings and recommendations. However, this poses two challenges for end users: (a) they might lack the knowledge and training to independently evaluate the accuracy of the output and/or the claims the developer makes regarding the test results; and (b) the proprietary interpretive routines might be opaque or ill-defined methodologically, obscuring critical evaluation by the end user or scientific community. Test developers should make sufficient information available to the end user (without compromising proprietary information or trade secrets) so that an independent evaluation of the validity of the interpretive report can be made, and so that the utility of the test in actual clinical practice can be independently evaluated.

In considering the broader context in which CNADs are applied today, it is important to distinguish between neuropsychological testing (utilizing cognitive tests to obtain behavioral samples of abilities in memory, concentration, or executive function) and neuropsychological assessment (providing a comprehensive evaluation of an individual that integrates test results with history, symptoms, behavioral observations, physical findings, and other aspects of the examinee's situation to yield interpretive statements about the underlying causes of the patient's performance pattern; Matarazzo, 1990). The interpretation of CNAD results requires the same specialized training and expertise in clinical neuropsychology as does the interpretation of examiner-administered neuropsychological tests. Moreover, the interpretation of computerized test results, like their examiner-administered counterparts, occurs in the context of knowledge of relevant information from the social, medical, mental health, educational, and occupational history of the examinee (Matarazzo, 1990). Because of this, the specific interpretive statements generated by CNAD software may not apply to the individual examinee. If applied in practice, these automated reports should be carefully reviewed for accuracy and relevance by someone with expertise in neuropsychological test interpretation in each specific case. Consistent with professional competence, clinicians “do not promote the use of psychological assessment techniques by unqualified persons, except when such use is conducted for training purposes with appropriate supervision” (APA, 2010, Ethical Standard 9.07, Assessment by Unqualified Persons).

(3) Technical (Hardware/Software/Firmware) Issues

Position Statement

Test developers should provide users with sufficient technical information to ensure that the local installation of a CNAD will produce data that can be accurately compared to the data in the test's normative database.

Discussion

As is true of examiner-administered assessment instruments, CNADs are developed within a specific environment that helps define the domains to which the test and its results can be generalized. Technical aspects of the computing environment in which the test was developed may critically affect how the test performs when applied in clinical settings (Cernich, Brennan, Barker, & Bleiberg, 2007). Such aspects include the computer or tablet's operating system, the speed of the central processing unit (CPU), the amount of available memory, how the program clock interacts with the system clock (McInnes & Taylor, 2001), the resolution and refresh rate of the display (Gofen & Mackeben, 1997), characteristics of the user interface (Gaver, 1991), and other aspects of the operating system environment (Forster & Forster, 2003; Myors, 1999; Plant & Turner, 2009; Plant, Hammond, & Turner, 2004). Even subtle differences between the performance of the test during standardization and its performance when locally installed on the user's computer, tablet, or handheld device can influence whether the test performs as advertised. For example, performance indices that rely on millisecond distinctions between groups become less discriminative if the operating environment is clouded by operating system interference, security verification, and/or commonly scheduled program updates that interfere with timing resolution (Creeger, Miller, & Paredes, 1990).

CNADs installed on a user's local machine should duplicate with sufficient accuracy the computing environment in which the normative performance data for the test were established. If this fidelity cannot be demonstrated or confirmed, users have reason to doubt the results of the CNAD. Test developers are expected to provide specific guidance to users that will enable them to determine whether their local installation meets certain technical criteria, including a clear description of the necessary hardware and software configuration, and a developer-provided diagnostic that allows the user to determine that the test has been properly installed and will operate with fidelity to the normative installation.
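As an illustration of the kind of developer-provided diagnostic recommended here, the short sketch below measures two properties that commonly differ across local installations, timer granularity and sleep (frame) jitter, and compares them against pass/fail tolerances that are purely hypothetical; an actual CNAD would ship thresholds derived from its own normative configuration.

```python
# Hedged sketch of a local timing-fidelity check; tolerances are assumptions.
import time
import statistics

def timer_resolution(samples=1000):
    """Smallest nonzero increment observable from the high-resolution clock."""
    deltas = []
    for _ in range(samples):
        t0 = time.perf_counter()
        t1 = time.perf_counter()
        while t1 == t0:               # spin until the clock advances
            t1 = time.perf_counter()
        deltas.append(t1 - t0)
    return min(deltas)

def sleep_jitter(target_ms=16.7, trials=100):
    """Mean and worst-case deviation from a requested frame-length delay."""
    errors = []
    for _ in range(trials):
        t0 = time.perf_counter()
        time.sleep(target_ms / 1000.0)
        errors.append(abs((time.perf_counter() - t0) * 1000.0 - target_ms))
    return statistics.mean(errors), max(errors)

resolution = timer_resolution()
mean_err, worst_err = sleep_jitter()
print(f"timer resolution: {resolution * 1e6:.1f} microseconds")
print(f"sleep jitter: mean {mean_err:.2f} ms, worst {worst_err:.2f} ms")
# Purely illustrative pass/fail rule; real tolerances must come from the
# developer's normative installation.
if resolution > 0.001 or worst_err > 5.0:
    print("WARNING: local timing may not match the normative environment.")
```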

(4) Privacy, Data Security, Identity Verification, and Testing Environment

Position Statement

Ultimately, maintaining patient privacy and security is the responsibility of the healthcare professional who collects, stores, and/or transmits personal health information, and users of CNADs should have sufficient knowledge of information technology to assure that patient rights are protected. Meeting this responsibility requires the provision of detailed information about how the CNAD collects and stores patient data. If data are to be transmitted to remote servers or databases for normative referencing or automated report generation, users need to understand how those data are protected from security intrusion, corruption, or other threats to data integrity and privacy. Test developers should provide a procedure to verify the identity of examinees who complete a CNAD remotely.

Discussion

Some CNADs store patient data files on a local hard disk, whereas others utilize a “store and forward” web interface in which the patient's data are collected locally and then uploaded via a web connection after testing is completed (Cernich et al., 2007). Users need detailed information about the security precautions in place when such data are transmitted, stored, and accessed, and about the procedures in place in the event of inadvertent data loss. Because users have legal obligations to examinees imposed by the HIPAA Security Rule, civil rights legislation, and ethical guidelines, they also need assurances about the security and privacy of data that are transmitted over the web to remote databases (American Psychological Association, 2007). Prevailing law and best practice require the use of encryption technologies that offer a measure of protection from unauthorized intrusion. Users need to be informed about, and aware of, the unique characteristics of electronic data, how to protect privacy when transmitting data to remote sites, and the challenges that exist in disposing of electronic data.
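For illustration only, the sketch below encrypts a result record before it is stored or forwarded, using the third-party Python cryptography package; this is an assumed mechanism, not a description of any vendor's implementation, and key provisioning and management are deliberately out of scope.

```python
# Minimal sketch of encrypting a "store and forward" record; the field names
# and the use of the `cryptography` package are illustrative assumptions.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, provisioned and stored securely
cipher = Fernet(key)

record = {                      # hypothetical, pseudonymized result record
    "examinee_id": "pseudonym-123",
    "test": "example-cnad",
    "raw_scores": [41, 37, 52],
}

ciphertext = cipher.encrypt(json.dumps(record).encode("utf-8"))
# Only the ciphertext is written to disk or uploaded to the remote server.

restored = json.loads(cipher.decrypt(ciphertext).decode("utf-8"))
assert restored == record
```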

By design, CNADs can be administered remotely, and identity verification can pose particular challenges when computerized measures are used in this way. Though identity verification does not pose particular problems for in-person administration of a CNAD, remote use, especially in situations where there is no proctor to verify identity, presents logistical and ethical issues (Naglieri, Drasgow, Schmit, et al., 2004). In internet-based applications, examinees could be impersonated by accomplices or assisted by others in order to feign or enhance their test performance. Even with security protocols that include provision of personal information to verify identity, an informed accomplice could assist. In certain settings (e.g., Department of Defense), systems are being developed wherein the person being assessed is identified by a personal identity verification card or common access card (CAC). This is a federal access card that includes password protection and personal information about the individual, including a biometric identifier (fingerprint; Department of Defense, 2008). Though these systems are restricted in nature, they may provide a template by which individuals could be identified for remote testing in a secure and authentic manner. Although it is not suggested that remote CNADs must contain such sophisticated biometric identification routines, it is important that developers address identity verification concerns in a thoughtful way so that users can be reasonably assured that the remotely assessed examinee is who he or she purports to be.

Remote testing, of course, creates special challenges relating to the reliability and validity of the results—given that it can be difficult to control the environment in which the administration occurs. For example, tasks that are dependent on precise presentation of stimuli or that require motor responses may be performed differently if the examinee is lounging on a couch using a laptop than if the examinee is seated in an office environment at a well-lit desk. Test developers should provide general guidance regarding characteristics of the testing environment that are reasonably likely to affect test performance so that users can advise examinees about such environmental considerations.

(5) Psychometric Development Issues

Position Statement

CNADs are subject to the same standards and conventions of psychometric test development, including descriptions of reliability, validity, and clinical utility (accuracy and diagnostic validity), as are examiner-based measures. Psychometric information should be provided to potential users of the CNAD in a manner that enables the user to ascertain the populations and assessment questions to which the test can be appropriately applied. Test developers should provide psychometric data relevant to the claimed purpose or application of the test. The actual data provided may vary depending on whether the test's claimed purpose is to provide a description of cognitive functions or domains versus assisting with the identification of the cognitive sequelae of specific diseases, injuries, or conditions. When established examiner-administered tests are offered in a computerized version, new psychometric data that describe the CNAD version are required. Information about how the data are scored, transformed, and analyzed to produce the CNAD's output statistics should be provided with sufficient clarity that users understand the meaning of the results they produce.

Discussion

Prevailing ethical standards (APA, 2010) state that, “Psychologists who develop tests and other assessment techniques use appropriate psychometric procedures and current scientific or professional knowledge for test design, standardization, validation, reduction or elimination of bias, and recommendations for use” (Standard 9.05). Although these standards are not binding on non-psychologist developers, the fact remains that, in order to be useful and meaningful in practice, all cognitive tests must meet minimum psychometric standards for reliability and validity. Reliability refers to the consistency of a test's output and pertains to both test scores and the clinical inferences derived from test scores (cf. Franzen, 1989, 2000). Reliability can be evaluated through several kinds of evidence, including (a) consistency across test items (internal consistency), (b) consistency over time (test-retest reliability or test stability), (c) consistency across alternate forms (alternate form reliability), and (d) consistency across raters (inter-rater reliability).
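As a worked illustration of one of these indices, the sketch below computes test-retest reliability as a Pearson correlation between two administrations; the score pairs are fabricated for demonstration only.

```python
# Worked sketch of test-retest reliability; scores are fabricated examples.
from statistics import mean, stdev

time1 = [52, 47, 60, 55, 43, 49, 58, 51]   # first administration
time2 = [50, 45, 62, 53, 44, 47, 59, 54]   # retest of the same examinees

def pearson_r(x, y):
    """Pearson correlation between paired score lists."""
    n = len(x)
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return cov / (stdev(x) * stdev(y))

print(f"test-retest reliability: r = {pearson_r(time1, time2):.2f}")
```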

Validity refers to the degree to which a test measures the construct(s) it purports to measure. According to classical test theory (cf. Downing & Haladyna, 2006), types of validity include (a) content validity, (b) criterion-related validity (e.g., concurrent and predictive validity), and (c) construct validity (e.g., convergent and discriminant validity). Validity may be defined as the extent to which theory and empirical evidence support the interpretation(s) of test scores when they are used as intended by the test developer or test publisher (American Psychological Association, 1999; Messick, 1989; Pedhazur & Pedhazur Schmelkin, 1991). In other words, validity is a property of the interpretation or meaning attached to a test score within a specific context of test usage, not a property of a given test (Cronbach, 1971, p. 447; cf. Franzen, 1989, 2000; Urbina, 2004).

Reliability and validity are not unitary psychometric constructs. Instead, they are measured in studies in different clinical contexts with diverse populations. Moreover, reliability and validity should be viewed as a matter of degree rather than in absolute terms, and tests must be re-evaluated as populations and testing contexts change over time (Nunnally & Bernstein, 1994). Developers of CNADs are encouraged to update their psychometric studies and their normative databases over time. Working knowledge of reliability and validity, and the factors that impact those psychometric constructs, is a central requirement for responsible and competent test use, whether the measure is used for diagnostic or research purposes.

Neuropsychological tests yield scores that are derived from a comparison of a person's performance to the performance of a healthy normative sample, clinical samples, one's own expected level of performance or, in the case of symptom validity tests, research participants who had been given specific instructions to perform in a certain manner. The quality and representativeness of normative data can have a major effect on the clinical interpretation of test scores (cf. Mitrushina, Boone, Razani, & D'Elia, 2005). APA (2010) Ethical Standard 9.02 (Use of Assessments), section (b), states, “Psychologists use assessment instruments whose validity and reliability have been established for use with members of the population tested. When such validity or reliability has not been established, psychologists describe the strengths and limitations of test results and interpretation.”
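In its simplest form, the normative comparison described above reduces to the standardization arithmetic sketched below; the normative mean and standard deviation are hypothetical values standing in for those a manual would publish for a demographically matched group.

```python
# Minimal sketch of a normative comparison; norm values are assumptions.
from statistics import NormalDist

norm_mean, norm_sd = 50.0, 10.0    # hypothetical matched normative group
raw_score = 38.0

z = (raw_score - norm_mean) / norm_sd
percentile = NormalDist().cdf(z) * 100
print(f"z = {z:.2f}, percentile = {percentile:.1f}")  # z = -1.20, ~11.5th %ile
```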

It cannot be assumed that the normative data obtained for an examiner-administered test apply equally well to a computerized version of the same test, due to changes in the method of administration and variations in computer familiarity across patient demographics. Studies of the comparability of computerized measures that are adaptations of examiner-administered tests indicate that there are substantive differences in some samples (Berger, Chibnall, & Gfeller, 1997; Campbell et al., 1999; Choca & Morris, 1992; Ozonoff, 1995; Tien et al., 1996), further demonstrating the need for new normative data obtained with the computerized test. As we have indicated above, a computerized test adapted from an examiner-administered test is a new test. As a result, it is essential that new normative data with adjustments for the pertinent demographic variables be established for computerized tests. The relevant standard from the APA (2010) Ethics Code, Standard 9.02 (Use of Assessments), section (a), states that, “Psychologists administer, adapt, score, interpret, or use assessment techniques, interviews, tests, or instruments in a manner and for purposes that are appropriate in light of the research on or evidence of the usefulness and proper application of the techniques.”

Prior to using tests diagnostically, it is important to have information relating to their accuracy for that purpose (Newman & Kohn, 2009). Operating characteristics (sensitivity, specificity, positive predictive power, and negative predictive power) are important and should be considered when using a test in a specific clinical setting. Sensitivity is the accuracy of a test for identifying a condition of interest (i.e., the proportion with the condition that is correctly identified); such cases are considered true positive results. Specificity is the proportion of people who do not have the condition who are correctly identified; such cases are considered true negative results. Positive predictive value (PPV) is the probability that a person has a disease or condition given a positive test result; that is, the proportion of individuals with positive test results who are correctly identified by the test. Negative predictive value (NPV) is the probability that a person does not have a disease or condition given a negative test result; that is, the proportion of individuals with negative test results who are correctly identified as not having the condition. PPV and NPV are related to the base rate, or prevalence in a given population, of the condition/disease that one is trying to identify. For example, a sensitive test may result in many false positive results (low PPV) if the prevalence of the condition of interest is low. Research relating to sensitivity, specificity, and diagnostic classification accuracy (Retzlaff & Gibertini, 2000) is an important foundation for the proper use of CNADs just as it is for examiner-administered neuropsychological tests. Research relating to diagnostic validity must be evaluated with a critical eye toward the actual diagnostic question to which the CNAD will be applied. For example, it is important to know whether a measure is useful in differentiating patients with dementia from neurologically intact individuals, or whether it can also be useful in making the more difficult distinction between those with dementia and those with mild cognitive impairment.
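The base-rate dependence of PPV and NPV described above follows directly from Bayes' rule, as the short worked example below shows; the sensitivity, specificity, and prevalence values are illustrative, not drawn from any specific CNAD.

```python
# Worked example: PPV and NPV as a function of base rate (illustrative values).
def ppv_npv(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# The same accurate test behaves very differently at different base rates:
for prev in (0.02, 0.20, 0.50):
    ppv, npv = ppv_npv(sensitivity=0.90, specificity=0.90, prevalence=prev)
    print(f"prevalence {prev:.0%}: PPV = {ppv:.2f}, NPV = {npv:.2f}")
# At 2% prevalence, PPV is only about 0.16 despite 90% sensitivity and
# specificity, illustrating the false-positive problem noted above.
```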

Data included in the technical manual should be appropriate to the use of the test intended by the developer. Depending on the claim made, information related to reliability, validity, and diagnostic classification should be included to allow the user to evaluate the utility of that specific CNAD in fulfilling its claimed purpose. Also, test developers are encouraged to make clear to the user what other information is required in the applied context in order to use the test most appropriately. This can be done, for example, by relating performance on the test to prevailing diagnostic algorithms with proven validity in the clinical literature.

(6) Examinee Issues: Cultural, Experiential, and Disability Factors

Position Statement

Test developers should provide appropriate normative information that allows the user to determine whether the CNAD can be given to patients from different racial, ethnic, age, and educational backgrounds. Some patients with cognitive, motor, or sensory disabilities might have difficulty completing a computerized test in the manner intended by the developer. In addition, individual differences in computer use and familiarity may affect how examinees interact with devices, utilize response modalities, or respond to stimuli. Test developers are encouraged to provide documentation that such factors have been accounted for during test standardization and validation, and should provide guidance to users with regard to how motor, sensory, or cognitive impairment in targeted patient populations may affect their performance on the test. It is particularly important to specify conditions under which the test should not be used in patients with motor, sensory, or cognitive impairment.

Discussion

As with examiner-administered neuropsychological assessment, computerized testing has limitations regarding the scope of information that can be obtained and the validity of the data that are collected. One key aspect of this issue is how the physical, psychiatric, or neurologic condition of the patient affects his or her ability to interact with the computer interface. For example, computerized assessment places demands on the examinee's academic skills (e.g., reading), cognitive function (e.g., attention), and sensory and motor functioning (e.g., rapid reaction time) that may influence the results if the examinee has disabilities or limitations in one or more of these areas. If the examinee does not comprehend task instructions, the results of the test will not be valid. Similarly, if the program requires speeded motor responses as a proxy for cognitive processing speed, patients with bradykinesia, tremors, or hemiparesis may be significantly compromised for reasons apart from impairments in the targeted construct. For the hemiparetic patient, the validity or reliability of the measure might be diminished if the non-dominant hand must be used to manipulate the mouse or provide a motor response. As with examiner-administered tests, numerous similar examples can be envisioned that complicate the quality of the data that emerge from CNADs (Hitchcock, 2006).

A key issue with many CNADs is that they may not include plans for consistent observation of the examinee by a trained examiner. Therefore, clinically useful information may be missed relating to task engagement, display of emotion, frustration, or tendency to give up easily when confronted with more challenging test items. Significant individual differences exist in computer use and familiarity (Iverson, Brooks, Ashton, Johnson, & Gualtieri, 2009), and results from computerized versus examiner-administered testing may be different in computer-competent versus computer-naïve populations (Feldstein et al., 1999).

Computerized assessment is constrained by the current hardware and software limitations of the field. Consequently, assessment of some important and sensitive aspects of cognitive functioning, such as free recall (vs. recognition) memory, expressive language, visual-constructional skills, and executive functioning may be difficult to incorporate into a CNAD. Clinicians utilizing CNADs in practice are responsible for recognizing the limitations of this testing approach and for appropriately documenting the impact such factors may have upon their findings. In situations involving examinees who require special testing accommodations as a result of sensorimotor limitations, aphasia, dyslexia, confusion, or variable cooperation, or in those instances that require the assessment of individuals who are less facile or comfortable with computers and tablets, examiner-administered testing may be advantageous or preferred.

(7) Use of Computerized Testing and Reporting Services

Position Statement

Professionals “select scoring and interpretation services (including automated services) on the basis of evidence of the validity of the program and procedures as well as on other appropriate considerations” (APA, 2010, Ethical Standard 9.09, Test Scoring and Interpretation Services, section b). Those “who offer assessment or scoring services to other professionals accurately describe the purpose, norms, validity, reliability, and applications of the procedures and any special qualifications applicable to their use” (APA, 2010, Ethical Standard 9.09, Test Scoring and Interpretation Services, section a). Professionals “retain responsibility for the appropriate application, interpretation, and use of assessment instruments, whether they score and interpret such tests themselves or use automated or other services” (APA, 2010, Ethical Standard 9.09, Test Scoring and Interpretation Services, section c).

Discussion

Professionals who lack training and expertise in clinical assessment might be tempted to simply accept the content of automated reports that provide descriptive or interpretive summaries of test results, and to incorporate textual output from CNADs into their standard clinical reports. This might occur because clinicians assume, uncritically or without sufficient evidence, that such summaries accurately reflect an individual patient's status and that the scientific bases of such interpretations have been established in the clinical setting. Practitioners are encouraged to evaluate the accuracy and utility of automated clinical reports in light of the total corpus of information available on the patient, including symptom reports, functional abilities, personal and family history, and other relevant factors. Automated reports are best viewed as an important resource for knowledgeable professionals, rather than as a substitute for necessary and sufficient expertise.

(8) Checks on Validity of Responses and Results

Position Statement

Examinee compliance, cooperation, and sufficient motivation are essential to the process of obtaining valid neuropsychological test data (American Academy of Clinical Neuropsychology, 2007; Bush et al., 2005; Heilbronner, Sweet, Morgan, Larrabee, & Millis, 2009). Developers of CNADs are encouraged to address these issues during test development and standardization. It is important for test developers to consider carefully the role of motivation and effort when conducting computerized testing. This is particularly true for CNADs intended for use by professionals unfamiliar with the signs and consequences of reduced effort on cognitive test performance. Test developers are encouraged to (a) provide information on how poor effort can be identified by patterns of performance on the CNAD, or (b) make specific recommendations about additional tests or procedures that can be concurrently conducted to evaluate examinee effort.

Discussion

Over the past few decades, research on effort and its effects on the validity of neuropsychological test results has dominated forensic neuropsychology, and significant interest has been devoted to understanding the role of effort and motivation in producing impairments on a wide variety of neuropsychological tests (Boone, 2007; Larrabee, 2007; Sweet, King, Malina, Bergman, & Simmons, 2002). Effort has been shown to substantially influence neurocognitive test scores, and in some studies the variance attributable to effort is greater than that attributable to injury severity or other variables more directly related to underlying pathophysiology (Constantinou, Bauer, Ashendorf, Fisher, & McCaffrey, 2005; Green, Rohling, Lees-Haley, & Allen, 2001; Stevens, Friedel, Mehen, & Merten, 2008; West, Curtis, Greve, & Bianchini, 2011). These findings lead to the inescapable conclusion that carefully considering patient motivation and effort is a mainstream part of clinical practice (Boone, 2009). Without some form of assurance that the examinee has put forth adequate effort in completing neuropsychological tests, the clinician cannot interpret low test scores as evidence of impairment in a particular neurocognitive ability.

The assessment of effort requires the use of empirically derived indicators. Behavioral observations made during testing by a trained examiner are also useful but may suffice only when the lack of cooperation and effort affects overt behavior. Test developers are encouraged to provide users with procedural guidance about how to identify poor effort on the CNAD. This can be done by documenting a built-in measure of effort that has been appropriately validated within the CNAD, or by providing specific recommendations regarding other validated tests of effort that should be administered along with the CNAD.
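One simple, widely used form of empirically derived indicator is significantly below-chance accuracy on a two-alternative forced-choice task. The sketch below flags such performance with an exact binomial test; the item count, score, and alpha level are illustrative assumptions, and a validated CNAD would publish its own cutoffs.

```python
# Hedged sketch of a below-chance effort flag; thresholds are assumptions.
from math import comb

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p): chance of scoring k or fewer."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(k + 1))

n_items, n_correct = 50, 17                      # hypothetical 2AFC results
p_value = binom_cdf(n_correct, n_items)          # one-tailed, chance = 50%
print(f"P(<= {n_correct}/{n_items} correct by chance) = {p_value:.4f}")
if p_value < 0.05:
    print("Significantly below chance: consistent with intentional underperformance.")
```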

Conclusions

This position paper is intended to provide guidance for test developers and users of CNADs that will promote accurate and appropriate use of computerized tests in a way that maximizes clinical utility and minimizes risks of misuse. We fully recognize the tension that exists between industry and professional users in bringing CNADs to market. On the one hand, there is substantial need to improve access to neurocognitive testing for underserved patients who, for economic, socioeconomic, geographical, logistical, or cultural reasons, are not referred for, or cannot access, needed services. On the other hand, the development of CNADs is a complex enterprise: the tests themselves measure complex constructs, the technical and information technology issues that ensure appropriate installation of the test in the local environment are nuanced, and the manner in which different patient groups interact with the assessment device introduces important sources of variance that can affect the interpretation of the test results.

Although there are clear differences between stand-alone computerized platforms of common examiner-administered tests (e.g., Wisconsin Card Sorting Test) and full-fledged computerized testing systems, all developers of CNADs are encouraged to provide users with core information regarding (a) test reliability, validity, accuracy, and utility; (b) technical specifications, including how to ensure that the local installation faithfully duplicates the environment in which the normative data were collected; (c) methods to protect privacy and data integrity; (d) the minimal qualifications of those who can install, administer, or interpret the test; (e) further requirements regarding utilization of computerized or actuarial reporting services; (f) information on who can and cannot benefit from undergoing assessment; (g) what the test claims to be able to do for the patient and/or professional user; and (h) guidance with regard to how submaximal effort affects test results and how to interpret results when the examinee intentionally or unintentionally underperforms due to reasons other than neurocognitive compromise.

Computerized neuropsychological assessment devices (both individual tests and test batteries) are expected to meet the same psychometric standards of adequate reliability and validity for the intended clinical populations as examiner-administered neuropsychological tests. Adaptation of an examiner-administered test for computers or tablets should be accompanied by the development of normative or equivalency data for the computerized version; a computerized version is a new test and not merely a slightly different format for an existing test. Expertise in the interpretation of computerized tests requires advanced knowledge of testing theory and the complex interaction of multiple factors that can affect performance on cognitive tests, aside from putative or clearly established injury to the brain. Such expertise is typically obtained from specialized education, training, and experience in clinical neuropsychology.

Qualified test users understand that CNAD results must be interpreted in the context of relevant history, other test findings, and data available from other disciplines. For test results to be considered valid, all neuropsychological testing, including computerized testing, requires adequate motivation and cooperation from examinees.

It is clear that the competent use of appropriately developed computerized neuropsychological measures will serve an increasingly important role in the evaluation of a variety of patient populations. The use of CNADs clearly has a role in bringing valid and effective neuropsychological evaluation techniques to underserved populations. However, such application should proceed with an understanding that effective use of such techniques is not “plug and play”, but in fact requires attention to a broad range of factors that determine whether the test will be useful, accurate, and appropriate in the intended setting. Users and consumers of CNADs must be mindful that ethical and clinically useful practice requires that such tests meet appropriate quality and efficacy criteria, and that those employing CNADs have the education, training, and experience necessary to interpret their results in a manner that will best meet the needs of the patients they serve.

Acknowledgements

The authors thank NAN Policy and Planning Committee members Shane Bush, Ph.D., William MacAllister, Ph.D., Thomas Martin, Ph.D., and Michael Stutts, Ph.D. for their review and suggestions regarding this article.

Disclosures:

Russell M. Bauer, Ph.D. is supported in part by grant UL1 RR029890 to the University of Florida. Grant Iverson has led or been a member of research teams that have received grant funding from test publishing companies, the pharmaceutical industry, and the Canadian government to study the psychometrics of computerized and traditional neuropsychological tests. These funders include AstraZeneca Canada, Lundbeck Canada, Pfizer Canada, ImPACT Applications, Inc., CNS Vital Signs, Psychological Assessment Resources (PAR, Inc.), and the Canadian Institutes of Health Research. He is also a co-author of a test published by PAR, Inc. Alison Cernich's views are her own and do not necessarily represent the views of the Department of Veterans Affairs (VA). The VA had no role in the writing of the article or the decision to submit it for publication. Ronald Ruff has published four tests with Psychological Assessment Resources, Inc. Laurence Binder is the author and publisher of the Portland Digit Recognition Test.

Notes

1. The authors suggest, in the event that specific position statements are quoted in secondary sources, that the acronym “CNAD” be spelled out as “computerized neuropsychological assessment devices” for clarity in the secondary source.

References

  • American Academy of Clinical Neuropsychology . 2007 . American Academy of Clinical Neuropsychology (AACN) practice guidelines for neuropsychological assessment and consultation . The Clinical Neuropsychologist , 21 ( 2 ) : 209 – 231 .
  • American Psychological Association . 2007 . Record keeping guidelines . American Psychologist , 62 : 993 – 1004 .
  • American Psychological Association . 1999 . Standards for educational and psychological testing , Washington , DC : American Psychological Association .
  • American Psychological Association . 1986 . Guidelines for computer-based tests and interpretations , Washington , DC : American Psychological Association .
  • Anger , WK , Storzbach , D , Binder , LM , Campbell , KA , Rohlman , DS McCauley , L . 1999 . Neurobehavioral deficits in Persian Gulf veterans: evidence from a population-based study. Portland Environmental Hazards Research Center . Journal of the International Neuropsychological Society , 5 ( 3 ) : 203 – 212 .
  • APA (2010). Ethical Principles of Psychologists and Code of Conduct (2010 Amendments), from http://www.apa.org/ethics/code/index.aspx
  • Berger , SG , Chibnall , JT and Gfeller , JD . 1997 . Construct validity of the computerized version of the Category Test . Journal of Clinical Psychology , 53 ( 7 ) : 723 – 726 .
  • Bleiberg , J , Cernich , AN , Cameron , K , Sun , W , Peck , K Ecklund , PJ . 2004 . Duration of cognitive impairment after sports concussion . Neurosurgery , 54 ( 5 ) : 1073 – 1078 . , discussion 1078–1080
  • Bleiberg , J , Garmoe , WS , Halpern , EL , Reeves , DL and Nadler , JD . 1997 . Consistency of within-day and across-day performance after mild brain injury . Neuropsychiatry, Neuropsychology, and Behavioral Neurology , 10 ( 4 ) : 247 – 253 .
  • Bolfer , C , Casella , EB , Baldo , MV , Mota , AM , Tsunemi , MH Pacheco , SP . 2010 . Reaction time assessment in children with ADHD . Arquivos de Neuro-Psiquiatria , 68 ( 2 ) : 282 – 286 .
  • Boone , KB . 2009 . The need for continuous and comprehensive sampling of effort/response bias during neuropsychological examinations . The Clinical Neuropsychologist , 23 : 729 – 741 .
  • Boone , KB . (Ed.) (2007). Assessment of feigned cognitive impairment: A neuropsychological perspective. New York: Guilford Press
  • Broglio , SP , Ferrara , MS , Macciocchi , SN , Baumbartner , TA and Elliott , R . 2007 . Test-retest reliability of computerized concussion assessment programs . Journal of Athletic Training , 42 : 509 – 514 .
  • Brooks , BL , Iverson , GL , Sherman , EM and Roberge , MC . 2010 . Identifying cognitive problems in children and adolescents with depression using computerized neuropsychological testing . Applied Neuropsychology , 17 ( 1 ) : 37 – 43 .
  • Bush , SS , Ruff , RM , Troster , AI , Barth , JT , Koffler , SP Pliskin , NH . 2005 . Symptom validity assessment: practice issues and medical necessity NAN policy & planning committee . Archives of Clinical Neuropsychology , 20 ( 4 ) : 419 – 426 .
  • Campbell , KA , Rohlman , DS , Storzbach , D , Binder , LM , Anger , WK Kovera , CA . 1999 . Test-retest reliability of psychological and neurobehavioral tests self-administered by computer . Assessment , 6 ( 1 ) : 21 – 32 .
  • Cernich , A , Reeves , D , Sun , W and Bleiberg , J . 2007 . Automated Neuropsychological Assessment Metrics sports medicine battery . Archives of Clinical Neuropsychology , 22 ( Suppl 1 ) : S101 – 114 .
  • Cernich , AN , Brennana , DM , Barker , LM and Bleiberg , J . 2007 . Sources of error in computerized neuropsychological assessment . Archives of Clinical Neuropsychology , 22S : S39 – S48 .
  • Chamberlain , SR , Robbins , TW , Winder-Rhodes , S , Muller , U , Sahakian , BJ Blackwell , AD . 2011 . Translational approaches to frontostriatal dysfunction in attention-deficit/hyperactivity disorder using a computerized neuropsychological battery . Biological Psychiatry , 69 ( 12 ) : 1192 – 1203 .
  • Choca , J and Morris , J . 1992 . Administering the Category Test by computer: Equivalence of results . The Clinical Neuropsychologist , 6 ( 1 ) : 9 – 15 .
  • Collie , A , Makdissi , M , Maruff , P , Bennell , K and McCrory , P . 2006 . Cognition in the days following concussion: comparison of symptomatic versus asymptomatic athletes . Journal of Neurology, Neurosurgery, and Psychiatry , 77 ( 2 ) : 241 – 245 .
  • Collins , MW , Lovell , MR , Iverson , GL , Ide , T and Maroon , J . 2006 . Examining concussion rates and return to play in high school football players wearing newer helmet technology: a three year prospective cohort study . Neurosurgery , 58 ( 2 ) : 275 – 286 .
  • Constantinou , M , Bauer , L , Ashendorf , L , Fisher , JM and McCaffrey , RJ . 2005 . Is poor performance on recognition memory effort measures indicative of generalized poor performance on neuropsychological tasks? . Archives of Clinical Neuropsychology , 20 : 191 – 198 .
  • Creeger , CP , Miller , KF and Paredes , DR . 1990 . Micromanaging time: Measuring and controlling timing errors in computer-controlled experiments . Behavior Research Methods, Instruments, and Computers , 22 : 34 – 79 .
  • Cronbach, L. J. (1971). Test validation. In R. L. Thorndike (Ed.), Educational measurement (2nd ed., pp. 443–507). Washington, DC: American Council on Education.
  • Crook, T. H., Kay, G. G., & Larrabee, G. J. (2009). Computer-based cognitive testing. In I. Grant & K. M. Adams (Eds.), Neuropsychological assessment of neuropsychiatric and neuromedical disorders (3rd ed., pp. 84–100). New York: Oxford University Press.
  • Doniger, G. M., Dwolatzky, T., Zucker, D. M., Chertkow, H., Crystal, H., & Schweiger, A. (2006). Computerized cognitive testing battery identifies mild cognitive impairment and mild dementia even in the presence of depressive symptoms. American Journal of Alzheimer's Disease and Other Dementias, 21(1), 28–36.
  • Doniger, G. M., Zucker, D. M., Schweiger, A., Dwolatzky, T., Chertkow, H., & Crystal, H. (2005). Towards practical cognitive assessment for detection of early dementia: A 30-minute computerized battery discriminates as well as longer testing. Current Alzheimer Research, 2(2), 117–124.
  • Dorion, A. A., Sarazin, M., Hasboun, D., Hahn-Barma, V., Dubois, B., & Zouaoui, A. (2002). Relationship between attentional performance and corpus callosum morphometry in patients with Alzheimer's disease. Neuropsychologia, 40(7), 946–956.
  • Downing, S. M., & Haladyna, T. M. (Eds.). (2006). Handbook of test development. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Dwolatzky, T., Whitehead, V., Doniger, G. M., Simon, E. S., Schweiger, A., & Jaffe, D. (2004). Validity of the Mindstreams computerized cognitive battery for mild cognitive impairment. Journal of Molecular Neuroscience, 24(1), 33–44.
  • Feldstein, S. N., Keller, F. R., Portman, R. E., Durham, R. L., Klebe, K. J., & Davis, H. P. (1999). A comparison of computerized and standard versions of the Wisconsin Card Sorting Test. The Clinical Neuropsychologist, 13, 303–313.
  • Forster, K. I., & Forster, J. C. (2003). DMDX: A Windows display program with millisecond accuracy. Behavior Research Methods, Instruments, & Computers, 35, 116–124.
  • Franzen, M. D. (1989). Reliability and validity in neuropsychological assessment. New York, NY: Plenum Press.
  • Franzen, M. D. (2000). Reliability and validity in neuropsychological assessment (2nd ed.). New York, NY: Kluwer Academic/Plenum Press.
  • Gaver, W. W. (1991). Technology affordances. In CHI '91: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Reaching through Technology. New York: Association for Computing Machinery.
  • Gofen, A., & Mackeben, M. (1997). An introduction to accurate display timing for PC's under "Windows". Spatial Vision, 10(4), 361–368.
  • Green, P., Rohling, M. L., Lees-Haley, P. R., & Allen, L. M., III. (2001). Effort has a greater effect on test scores than severe brain injury in compensation claimants. Brain Injury, 15(12), 1045–1060.
  • Gualtieri, C. T., & Johnson, L. G. (2005). Neurocognitive testing supports a broader concept of mild cognitive impairment. American Journal of Alzheimer's Disease and Other Dementias, 20(6), 359–366.
  • Gualtieri, C. T., & Johnson, L. G. (2006). Efficient allocation of attentional resources in patients with ADHD: Maturational changes from age 10 to 29. Journal of Attention Disorders, 9(3), 534–542.
  • Gualtieri, C. T., & Johnson, L. G. (2008). A computerized test battery sensitive to mild and severe brain injury. Medscape Journal of Medicine, 10(4), 90.
  • Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., Millis, S. R., & Conference Participants. (2009). American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23(7), 1093–1129.
  • Hitchcock, E. (2006). Computer access for people after stroke. Topics in Stroke Rehabilitation, 13, 22–30.
  • Iverson, G. L., Brooks, B. L., Ashton, V. L., Johnson, L. G., & Gualtieri, C. T. (2009). Does familiarity with computers affect computerized neuropsychological test performance? Journal of Clinical and Experimental Neuropsychology, 31, 594–604.
  • Iverson, G. L., Brooks, B. L., Collins, M. W., & Lovell, M. R. (2006). Tracking neuropsychological recovery following concussion in sport. Brain Injury, 20(3), 245–252.
  • Iverson, G. L., Brooks, B. L., Langenecker, S. A., & Young, A. H. (2011). Identifying a cognitive impairment subgroup in adults with mood disorders. Journal of Affective Disorders, 132, 360–367.
  • Iverson, G. L., Brooks, B. L., Lovell, M. R., & Collins, M. W. (2006). No cumulative effects for one or two previous concussions. British Journal of Sports Medicine, 40(1), 72–75.
  • Kramer, J. J. (1987). On the question of professional standards for computer-based test interpretation. American Psychologist, 42, 889–890.
  • Larrabee, G. J. (2007). Assessment of malingered neuropsychological deficits. New York: Oxford University Press.
  • Marx, B. P., Brailey, K., Proctor, S. P., MacDonald, H. Z., Graefe, A. C., & Amoroso, P. (2009). Association of time since deployment, combat intensity, and posttraumatic stress symptoms with neuropsychological outcomes following Iraq War deployment. Archives of General Psychiatry, 66(9), 996–1004.
  • Matarazzo, J. D. (1985). Clinical psychological test interpretations by computer: Hardware outpaces software. Computers in Human Behavior, 1, 235–253.
  • Matarazzo, J. D. (1986). Computerized clinical psychological test interpretations: Unvalidated plus all mean and no sigma. American Psychologist, 41, 96.
  • Matarazzo, J. D. (1990). Psychological assessment versus psychological testing: Validation from Binet to the school, clinic, and courtroom. American Psychologist, 45, 999–1017.
  • McInnes, W. J., & Taylor, T. L. (2001). Millisecond timing on PC's and Macs. Behavior Research Methods, Instruments, & Computers, 31(1), 129–136.
  • McLay, R., Spira, J., & Reeves, D. (2010). Use of computerized neuropsychological testing to help determine fitness to return to combat operations when taking medication that can influence cognitive function. Military Medicine, 175(12), 945–946.
  • Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). New York: American Council on Education and Macmillan.
  • Mitrushina, M., Boone, K. B., Razani, J., & D'Elia, L. F. (2005). Handbook of normative data for neuropsychological assessment (2nd ed.). New York: Oxford University Press.
  • Moore, J. L., McAuley, J. W., Long, L., & Bornstein, R. (2002). An evaluation of the effects of methylphenidate on outcomes in adult epilepsy patients. Epilepsy & Behavior, 3(1), 92–95.
  • Myors, B. (1999). Timing accuracy of PC programs running under DOS and Windows. Behavior Research Methods, Instruments, & Computers, 31, 322–328.
  • Naglieri, J. A., Drasgow, F., Schmit, M., Handler, L., Prifitera, A., Margolis, A., & Velasquez, R. (2004). Psychological testing on the internet: New problems, old issues. American Psychologist, 59(3), 150–162.
  • Newman, T. B., & Kohn, M. R. (2009). Evidence-based diagnosis. New York: Cambridge University Press.
  • Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.
  • Ozonoff, D. (1995). Environmental medicine for all: Getting there from here. Lancet, 346(8979), 860.
  • Pedhazur, E. J., & Pedhazur Schmelkin, L. (1991). Measurement, design, and analysis: An integrated approach. Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Peterson, S. E., Stull, M. J., Collins, M. W., & Wang, H. E. (2009). Neurocognitive function of emergency department patients with mild traumatic brain injury. Annals of Emergency Medicine, 53(6), 796–803.e1.
  • Plant, R. R., & Turner, G. (2009). Millisecond precision psychological research in a world of commodity computers: New hardware, new problems? Behavior Research Methods, 41(3), 598–614.
  • Plant, R. R., Hammond, N., & Turner, G. (2004). Self-validating presentation and response timing in cognitive paradigms: How and why? Behavior Research Methods, Instruments, & Computers, 36(2), 291–303.
  • Polderman, T. J., van Dongen, J., & Boomsma, D. I. (2011). The relation between ADHD symptoms and fine motor control: A genetic study. Child Neuropsychology, 17(2), 138–150.
  • Raymond, P. D., Hinton-Bayre, A. D., Radel, M., Ray, M. J., & Marsh, N. A. (2006). Assessment of statistical change criteria used to define significant change in neuropsychological test performance following cardiac surgery. European Journal of Cardiothoracic Surgery, 29(1), 82–88.
  • Reise, S. P., & Waller, N. G. (2009). Item response theory and clinical measurement. Annual Review of Clinical Psychology, 5, 27–48.
  • Retzlaff, P. D., Callister, J. D., & King, R. E. (1999). Clinical procedures for the neuropsychological evaluation of U.S. Air Force pilots. Military Medicine, 164(7), 514–519.
  • Retzlaff, P. D., & Gibertini, M. (2000). Neuropsychometric issues and problems. In R. D. Vanderploeg (Ed.), Clinician's guide to neuropsychological assessment (pp. 277–299). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Slick, D. J., Tan, J. E., Strauss, E., Mateer, C. A., Harnadek, M., & Sherman, E. M. (2003). Victoria Symptom Validity Test scores of patients with profound memory impairment: Nonlitigant case studies. The Clinical Neuropsychologist, 17(3), 390–394.
  • Stevens, A., Friedel, E., Mehren, G., & Merten, T. (2008). Malingering and uncooperativeness in psychiatric and psychological assessment: Prevalence and effects in a German sample of claimants. Psychiatry Research, 157, 191–200.
  • Sweeney, J. A., Kmiec, J. A., & Kupfer, D. J. (2000). Neuropsychologic impairments in bipolar and unipolar mood disorders on the CANTAB neurocognitive battery. Biological Psychiatry, 48(7), 674–684.
  • Sweet, J. J., King, J. H., Malina, A. C., Bergman, M. A., & Simmons, A. (2002). Documenting the presence of forensic neuropsychology at national meetings and in relevant professional journals from 1990 to 2000. The Clinical Neuropsychologist, 16, 481–494.
  • Thomas, M. L. (2010). The value of item response theory in clinical assessment: A review. Assessment, 18(3), 291–307.
  • Tien, A. Y., Spevack, T. V., Jones, D. W., Pearlson, G. D., Schlaepfer, T. E., & Strauss, M. E. (1996). Computerized Wisconsin Card Sorting Test: Comparison with manual administration. The Kaohsiung Journal of Medical Sciences, 12(8), 479–485.
  • Tornatore, J. B., Hill, E., Laboff, J. A., & McGann, M. E. (2005). Self-administered screening for mild cognitive impairment: Initial validation of a computerized test battery. Journal of Neuropsychiatry and Clinical Neurosciences, 17(1), 98–105.
  • Urbina, S. (2004). Essentials of psychological testing. Hoboken, NJ: John Wiley & Sons.
  • Van Kampen, D. A., Lovell, M. R., Pardini, J. E., Collins, M. W., & Fu, F. H. (2006). The "value added" of neurocognitive testing after sports-related concussion. American Journal of Sports Medicine, 34(10), 1630–1635.
  • Vasterling, J. J., Proctor, S. P., Amoroso, P., Kane, R., Heeren, T., & White, R. F. (2006). Neuropsychological outcomes of army personnel following deployment to the Iraq war. Journal of the American Medical Association, 296(5), 519–529.
  • West, L. K., Curtis, K. L., Greve, K. W., & Bianchini, K. J. (2011). Memory in traumatic brain injury: The effects of injury severity and effort on the Wechsler Memory Scale-III. Journal of Neuropsychology, 5, 114–125.
  • Wild, K., Howieson, D., Webbe, F., Seelye, A., & Kaye, J. (2008). Status of computerized cognitive testing in aging: A systematic review. Alzheimer's & Dementia, 4(6), 428–437.
  • Wouters, H., de Koning, I., Zwinderman, A. H., van Gool, W. A., Schmand, B., & Buiter, M. (2009). Adaptive cognitive testing in cerebrovascular disease and vascular dementia. Dementia and Geriatric Cognitive Disorders, 28(5), 486–492.
  • Wouters, H., Zwinderman, A. H., van Gool, W. A., Schmand, B., & Lindeboom, R. (2009). Adaptive cognitive testing in dementia. International Journal of Methods in Psychiatric Research, 18(2), 118–127.
