
Attorney demands for protected psychological test information: Is access necessary for cross examination or does it lead to misinformation? An interorganizational* position paper

Pages 889-906 | Received 17 Jan 2024, Accepted 20 Feb 2024, Published online: 28 Feb 2024

Abstract

Objective: Some attorneys claim that to adequately cross-examine neuropsychological experts, they require direct access to protected test information, rather than having test data analyzed by retained neuropsychological experts. The objective of this paper is to critically examine whether direct access to protected test materials by attorneys is indeed necessary, appropriate, and useful to the trier of fact. Method: Examples are provided of the types of nonscientific misinformation that occur when attorneys, who lack adequate training in testing, attempt to independently interpret neurocognitive/psychological test data. Results: Release of protected test information to attorneys introduces inaccurate information to the trier of fact and jeopardizes future use of tests because non-psychologists are not ethically bound to protect test content. Conclusion: The public policy underlying the right of attorneys to seek possibly relevant documents should not outweigh the damage to tests and resultant misinformation that arise when protected test information is released directly to attorneys. The solution recommended by neuropsychological/psychological organizations and test publishers is to have protected psychological test information exchanged directly and only between clinical psychologist/neuropsychologist experts.

Introduction

Role of psychological and neuropsychological testing in legal proceedings

Courts seek the truth and provide justice in an impartial forum to resolve legal disputes, under rules of procedure and evidence. United States (U.S.) courts apply those rules to manage adversarial disputes to final adjudication. Court proceedings strive to hold alleged wrongdoers accountable for actions, whether criminal, civil, administrative, or probate.

Historically, U.S. courts were skeptical about experts, finding “that opposite opinions of persons professing to be experts may be obtained to any amount.” Winans v. N.Y. & Erie Railroad Co. (1859). Over time, courts adapted to emerging scientific methodology with a growing recognition that reliable and relevant expert evidence may shed light on subjects that are outside the common understanding of most jurors. In fact, the U.S. Supreme Court likened banning expert testimony—in this case, with respect to assessment of future dangerousness—to an effort to “disinvent the wheel.” Barefoot v. Estelle (1983).

Currently, expert testimony in the field of psychological and neuropsychological science is routine in state and federal courts. This is reflected in court decisions and rules that embrace expert testimony (Kaufmann, Citation2013) when it is helpful to the trier of fact and offered with a known degree of accuracy (see Frye v. United States, Citation1923; Fed. R. Evid. 702, Citation2023; Daubert v. Merrell Dow Pharm., Inc., 1993). Neuropsychologists play a particularly important role as expert witnesses in cases involving claims of cognitive and psychological dysfunction/damage precisely because assessing cognitive and psychiatric injury is so removed from the experience of lay triers of fact. The use of neuropsychology experts to assist the courts in making legal decisions has grown substantially over the last decade and is now a well-established area of forensic practice and research (Sweet et al., Citation2023), related to civil, criminal, administrative, and probate cases (Sweet, Klipfel, Nelson, & Moberg, Citation2021).

In any forensic context, the role of the neuropsychological expert is to provide accurate and pertinent evidence to assist the trier of fact. Without objective documentation of cognitive and psychological function based on validated neuropsychological and psychological testing, courts would have to rely primarily on the subjective, and possibly biased, self-report of litigants, defendants, and claimants. This is problematic in that there is a large body of literature documenting that self-report of cognitive symptoms has little relationship to results of objective neurocognitive testing (Gardner, Langa, & Yaffe, Citation2017; Howlett et al., Citation2022; Spencer et al., Citation2010). Self-report of cognitive and psychological status, including damages, cannot be solely relied upon because it is affected by many situational and external variables (e.g. Andersson, Marklund, Walles, Hagman, & Miley-Akerstedt, Citation2019; Tassoni et al., Citation2023) and is frequently not objectively accurate (Edmonds et al., Citation2014). In fact, the unreliability and inaccuracy of self-reported cognitive/psychological dysfunction prompted the fields of neuropsychology and clinical psychology to develop and validate neuropsychological and psychological tests as objective measurement tools. A statement read before the Board of Circuit Judges at its annual meeting on January 3, 1934, warned regarding self-report, “If litigants were left to their own methods [in submitting a disputed matter for the Court’s determination], utter chaos and confusion would result, for laymen are prone to exaggerate the importance of non-essentials and in the heat of battle, inflamed by the desire to win, frequently lose, in varying degrees, that strict regard for the truth, which the solemn obligation of the oath ought to assure…” (https://scholarship.law.marquette.edu/cgi/viewcontent.cgi?article=3951&context=mulr).

Notably, neuropsychological and psychological testing provide data as to whether test-takers are performing to their actual abilities and reporting symptoms accurately, including whether they are portraying cognitive and psychological dysfunction that does not exist, or that does not exist to the degree depicted. Prior to the development and scientific validation of performance validity tests (PVTs), which verify whether test-takers are performing to actual skill levels on neurocognitive testing, deliberate misrepresentation of cognitive dysfunction was thought to be rare. However, subsequent research showed that 30% or more of personal injury litigants/workers’ compensation claimants are judged to be exaggerating or feigning cognitive dysfunction (Mittenberg et al., Citation2002; Stevens et al., Citation2008), and the rates of cognitive symptom misrepresentation in compensation-seeking (personal injury litigants, disability applicants, etc.) individuals who report mild traumatic brain injury are ≥ 40% (Mittenberg et al., Citation2002; Boone et al., Citation2021). Similarly, cognitive symptom exaggeration in criminal defendants is quite common, with research suggesting rates of noncredible performance as high as 54% (Ardolf, Denney, & Houston, Citation2007). In contrast, the base rate of noncredible cognitive presentations in clinical/non-forensic contexts is about 10% or less (excluding patients with external incentives, medically unexplained symptoms, and oppositional attitude toward testing; Martin & Schroeder, Citation2020; see also Mittenberg et al., Citation2002). Therefore, neuropsychological evaluations in forensic cases, both civil and criminal, in which there are typically strong external incentives to feign, are unique situations and require objective test results, including data from multiple PVTs, as well as symptom validity tests (SVTs), which verify accuracy of symptom reporting and responding on psychological/personality testing. Standard neurocognitive and psychological tests without use of PVT and SVT indices do not effectively detect noncredible presentations. For example, relying upon standard neuropsychological test findings to determine performance invalidity has been known for decades to be essentially no better than chance (Faust et al., Citation1988; Heaton et al., Citation1978), or “flipping a coin.”
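
To make the role of base rates and test accuracy concrete, the brief sketch below (in Python) computes the positive predictive value of a validity indicator; the sensitivity and specificity values are invented for illustration and are not the operating characteristics of any actual PVT. A chance-level indicator, such as clinical judgment from standard test patterns alone, returns nothing beyond the base rate itself, whereas a reasonably accurate PVT substantially improves on it.

    # Hypothetical sketch: invented sensitivity/specificity values, not the
    # published operating characteristics of any actual PVT.
    def positive_predictive_value(base_rate, sensitivity, specificity):
        """P(truly noncredible | flagged as noncredible), via Bayes' rule."""
        true_pos = base_rate * sensitivity
        false_pos = (1 - base_rate) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    # Chance-level detection (cf. Faust et al., 1988; Heaton et al., 1978):
    print(positive_predictive_value(0.40, 0.50, 0.50))  # 0.40, the base rate itself
    # A hypothetical validated PVT (60% sensitivity, 90% specificity):
    print(positive_predictive_value(0.40, 0.60, 0.90))  # 0.80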

Threats to psychological and neuropsychological tests in the legal arena

Psychological and neuropsychological tests provide highly relevant information to the trier of fact, but to continue to do so, the test content needs to be withheld from the general public; tests can only provide accurate information when test takers are naïve to test content, instructions, and procedures prior to exam.

Unfortunately, neuropsychologists in litigated cases are encountering more frequent demands for direct access to protected neuropsychological/psychological test information by opposing counsel, such as test data sheets filled out during the exams to document responses, audio- and video-recordings of testing, and actual tests and test manuals. Attorneys argue that such information is critical to protect their clients and cross-examine opposing neuropsychological expert witnesses, and that courts should enforce their right to have unfettered access to such materials.

However, until relatively recently, and across jurisdictions, it was routine for both plaintiff and defense attorneys to agree to allow protected neuropsychological/psychological information to be exchanged only between retained neuropsychological experts, in compliance with the Ethics Code for Psychologists, and multiple position papers within neuropsychology and psychology arguing for maintenance of test security (e.g. see summary in Official position of the American Academy of Clinical Neuropsychology [AACN] on test security, Boone et al., Citation2022), as well as policies implemented by psychological test publishers (e.g. Western Psychological Services Position Statement re: Test Security, 2023). Simply put, attorneys have no formal training or expertise in analyzing test data, and therefore need to retain experts for that purpose.

Moreover, attorneys do not share the same ethical and professional responsibility to safeguard protected test materials (Victor & Abeles, Citation2004). Survey evidence from legal professionals suggests that test-takers are coached on psychological/neuropsychological tests when test information is possessed by non-psychologists who are not mandated to protect the tests. Wetter and Corrigan (Citation1995) conducted a survey in which they found that half of attorneys and a third of law students believed their clients should always or usually be informed about validity scales in psychological tests. Youngjohn (Citation1995) described a case in which a workers’ compensation attorney admitted on the record to the Administrative Law Judge at the Industrial Commission of Arizona that he had coached and educated his client prior to a neuropsychological exam, and Youngjohn had also been told by another attorney that it would be unethical for an attorney not to coach his client prior to a forensic neuropsychological evaluation.

Subsequently, Essig et al.’s (Citation2001) survey of members of the Association of Trial Lawyers of America revealed that 75% “typically spend up to an hour preparing their clients for neuropsychological evaluations and commonly cover test content, detection of malingering, and brain injury symptoms,” and 29% review the Minnesota Multiphasic Personality Inventory − 2 (MMPI-2) with their clients prior to examination. The authors conclude that “although only 8% appear to specifically instruct their clients how to respond to neuropsychological tests, advance knowledge of test content may be sufficient to allow the plaintiff to respond in a manner that does not reflect his or her current level of cognitive functioning and thus alter the expert’s conclusions.” More recently, Spengler et al. (Citation2020), in a survey of practicing attorneys, reported that over 50% endorsed providing clients with information regarding MMPI-2 validity scales. This finding is extremely problematic, given that meta-analytic research findings from 99 published studies have demonstrated that specific coaching related to MMPI-2 validity scales improved test-taker ability to elude detection of feigning (Aparcero, Picard, Nijdam-Jones, & Rosenfeld, Citation2021).

As an additional real world illustration of the problem of “coaching,” an April 2018 Motion for the Appointment of a Special Investigator pertaining to the National Football League Concussion Settlement stated that “fraud discovered in the Program so far is deep and widespread” (https://mdl.law.uga.edu/sites/default/files/2018.04.13%2520-%2520NFL%2527s%2520Memo%2520to%2520Appoint%2520a%2520Special%2520Investigator.pdf). It was noted that “a law firm representing more than 100 Settlement Class members coached retired players on how to answer questions during their neuropsychological evaluations” (p. 2), and “text messages and other communications reveal a disturbing pattern of a claims service provider coaching players to ‘beat’ the neuropsychological tests” (p. 3).

Victor and Abeles (Citation2004) conclude that the above examples of coaching “highlight the intense clash of ethical obligations between psychologists and attorneys in the forensic setting; it is within the attorney’s purview of responsibility to advocate for his/her client in ways that may preclude the prudent psychologist from making valid interpretations of the litigant’s behavior and functional status” (p. 374). Coaching can be expected to continue unless better procedures are in place to limit access to psychological/neuropsychological tests by non-psychologists. Withholding of protected test information from non-psychologists who are not ethically bound to protect test content is required to ensure that accurate and reliable information is provided to the trier of fact.

Recognition of the importance of test security in judicial rulings, state laws and regulations, and in other professions

The need to ensure test security has been endorsed by judicial rulings, state laws and regulations. The U.S. Supreme Court first addressed the security of psychological tests in Detroit Edison Co. v. National Labor Relations Bd. (1979) in which the “strong public policy” of test security was noted. The High Court ruled against the National Labor Relations Board (NLRB), finding that the concern for test secrecy was a reasonable as well as an indisputably important interest, and that a mere order "barring the Union from taking any action that might cause the tests to fall into the hands of employees who have taken or are likely to take them" did not adequately protect test security since the union was not a party to the litigation and would likely not be subject to contempt sanctions or other proceedings if it ignored the restriction. Others have argued that protective orders barring exposure of psychological tests can be enforced through contempt rulings or sanctions against attorneys (See Randy’s Trucking Inc. v. Sup. Ct. of Kern Cty, 2023), although, as discussed further below, the entire supposition that protective orders can currently be used to protect sensitive test information is in question.

The rationale for a privilege to protect psychological tests was first extended to a clinical case in Chiperas v. Rubin (1998), citing Edison. Kaufmann (Citation2005) elaborated on the psychologist nondisclosure privilege implied in Edison, as distinct from the psychotherapist-patient privilege (Jaffee v. Redmond, 1996), and subsequently identified a series of additional federal court and NLRB decisions that recognize that discovery of psychological tests is restricted under Edison (Kaufmann, Citation2009). Many states have enacted at least some protections for psychological (including neuropsychological) test materials and content (see Kaufmann, Citation2009). In the Iowa Administrative Code regarding psychological testing, it is specified that test data are only to be disclosed to a licensed psychologist who cannot further disseminate such information, and that licensed psychologists “shall not disclose test materials in any administrative, judicial, or legislative proceeding” (Iowa Admin. Code r. 645-243.4). See also Iowa Code Ann. § 228.9.

Maine (2013) adopted a model statute for protecting standardized psychological and neuropsychological tests from wrongful disclosure that has withstood judicial scrutiny. In Douglas v. Parkview Adventist Med. Ctr. (2017), the judge cited Maine’s (2013) “Act To Ensure the Integrity of Neuropsychological and Psychological Testing Materials and Data,” Me. Rev. Stat. 22 § 1725 (2023), and conducted an in camera review of the test materials, finding “Some of the handwriting is illegible” (p. 16). The judge noted “The documents that contain rating and scoring are meaningless to the untrained reader” (p. 16), before denying the defense request for release of raw data from psychological test protocols to non-psychologists. Douglas ruled that defense counsel provided “no reasoned and principled basis on which to order production of the documents in contravention of the statute or to reach the compromise” (p. 17) production order. No protective order was issued and no demand for patient access under HIPAA was heard.

Other professions have similar test protection concerns and zealously maintain test security. For example, the National Conference of Bar Examiners (NCBE) created a new position of Director of Test and Information Security to “further minimize the security risks that threaten to undermine the integrity of the bar admissions process” (https://thebarexaminer.org/article/spring-2018/test-and-informationsecurity-centralizing-security-initiatives-at-ncbe/). As noted, NCBE “closely safeguards the security of its exam questions. The security of the questions is important before exam day to ensure that no examinee has an unfair advantage by having gained advance knowledge of the questions…NCBE strictly prohibits copying, reproducing, or disclosing any NBE (National Bar Exam) questions or answers, whether via electronic, telephonic, written, oral, or other means, to any party or to any public forum during or after the exam.”

At the October 2018 Conference on Test Security (COTS), representatives of the NCBE were in attendance, as well as members of organizations involved in administration of the Law School Admission Test (LSAT), SAT, and Graduate Record Examination (GRE), and testing conducted in Kindergarten through the 12th grade. In this meeting (Albanese, Zhang, & Hill; Test Security: A Meeting of Minds, The Bar Examiner, Winter 2018–2019), it was concluded that “the importance of maintaining test security cannot be overemphasized, because cheating, regardless of which form it takes, erodes the validity of the interpretations of test scores and then undermines the legitimacy of decisions based on those scores.” The same protections required for academic and professional licensure tests should be extended to psychological and neuropsychological tests.

Managing competing interests regarding access to psychological tests by non-psychologists

The judicial system has an obligation to protect the enterprise of neuropsychological and psychological testing, given the critical information such tests provide in terms of safety to society (e.g. fitness for duty evaluations of pilots, police officer candidates, physicians and other medical personnel); accuracy of data on which legal, educational, and health care systems make decisions (e.g. competency to stand trial, damages in personal injury lawsuits, academic accommodations, need for medications); and appropriate allocation of societal resources (see further description in Official position of the American Academy of Clinical Neuropsychology [AACN] on test security, Boone et al., Citation2022).

At the same time, court rules seek to ensure that attorneys have access to all information necessary to rigorously prosecute or defend their cases. This legitimate need conflicts with the need to protect the integrity of tests. Seeking compromise, judges sometimes order that protected test information be released to opposing counsel under a protective order. The assumption is that limiting access to the information to individuals directly involved in the case, and providing for court-ordered destruction of disseminated information at the conclusion of the case, will ensure adequate protection of the field of psychological and neuropsychological testing.

However, this is likely an inaccurate assumption, for several reasons. First, protective orders were conceived prior to the digital age in which information can be rapidly uploaded and disseminated without oversight. Second, the sheer number of protective orders required for all current demands for protected test information likely guarantees that test security will be breached because there is no mechanism to track, much less monitor, compliance with such a large volume of materials. Third, individuals incentivized to breach test security are given possession of the protected test materials (i.e. “the fox is guarding the hen house”). That is, attorneys who want their clients to appear to have credible cognitive and/or psychological dysfunction on testing, even when symptoms are being misrepresented, are motivated to breach test security to coach clients as to how to “beat the tests.” Attorneys have a clear conflict of interest on this issue; while they are obligated to follow protective orders regarding psychological testing materials, they also view it as their responsibility to educate their clients regarding psychological and neuropsychological testing. Moreover, they stand to benefit financially by not abiding by protective orders, in that they can dramatically increase the value of their cases if their clients can successfully feign brain injury over and above other claimed injuries. Fourth, test materials would not only be made available to attorneys, but potentially to all other experts, consultants, and employees of law firms in each case, thereby providing access to dozens of non-psychologists in each litigated case. The fifth and final concern is that those with the strongest interest in protecting test security (i.e. the neuropsychologists and psychologists who rely on the tests) may lack standing to enforce an order against violators. Given the volume of cases and information disclosed, monitoring of compliance with each protective order is not only expensive, but extraordinarily difficult, especially when the parties to a case may be insufficiently motivated once a case is resolved.

Kaufmann (Citation2005) outlined the competing public policies underlying discovery and test security. For more than three centuries, common law has recognized the public right to require “every man’s evidence” as a fundamental discovery maxim. That is, no individual can refuse to give what relevant evidence they have in a legal proceeding except in very limited situations (e.g. a legally recognized privilege, or where the probative value of the evidence is substantially outweighed by policy concerns such as the danger of unfair prejudice, confusing the issues, or misleading the jury; see Fed. R. Evid. 403, Citation2023).

Even if one assumes that protected test information constitutes relevant, probative evidence, evidentiary privileges may regulate the court’s search for the truth, if the privilege “promotes sufficiently important interests to outweigh the need for probative evidence” (Jaffee v. Redmond, 1996). When issuing judicial decisions, judges are expected to weigh the relative harm/benefit to each side, as well as to balance the strength of the proprietary interest against a party’s need for the information and the extent to which the information may be otherwise available (i.e. through a retained expert). For example, judges are to avoid rulings that cause permanent and arguably catastrophic damage to one side while providing only minimal benefit to the other side. In the context of the irreversible damage that would be sustained by the fields of neuropsychological and psychological testing if protected test information is released to non-psychologists, it is imperative to examine the claim by opposing attorneys that access to this information is, in fact, critical to the litigation of their case.

Test information that can be disclosed to attorneys to allow appropriate critique of neuropsychological/psychological tests and serve as the basis for cross-examination questions

When discussing release of neuropsychological/psychological test information to non-psychologists, it is first important to define terms. Definitions of relevant terminology are contained in Table 1. Protected test data refer to materials that contain information that, if released to non-psychologists, could jeopardize future use of tests. Protected test data include tests and test manuals, unredacted test data sheets, test administration recordings, and score reports and narrative clinical reports provided by test publishers based on test-taker test scores. Test data include protected test data, but also numerical test data, such as raw scores, scaled scores, T-scores, z-scores, and percentiles.

Table 1. Terminology pertaining to test information.

As noted in the table, numerical test data do not contain protected test information, and they, as well as psychological/neuropsychological reports, can be released to non-psychologists without concern of harm to future use of tests. Additionally, in Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993), the U.S. Supreme Court suggested various factors to be considered when allowing expert scientific testimony. These “Daubert criteria” for admission of expert testimony provide a road map as to further information that can be made available to attorneys and the trier of fact regarding neuropsychological/psychological tests to allow them to evaluate the accuracy of the test information relied upon by experts:

  1. Does the technique have standardized procedures?

    • Experts can document that tests have standardized procedures as detailed in test manuals and peer-reviewed publications (while at the same time not disclosing the standardized procedures).

  2. Has the technique undergone “real world” validation?

    • Experts can provide information as to whether the technique was validated in “real world” samples versus experimental simulation studies (the latter have limited application to clinical settings).

  3. Has the technique been subjected to peer-review and publication?

    • Experts can document the peer-reviewed published status of validation studies for psychological tests.

  4. Is the error rate of the technique known?

    • Experts can provide information as to test accuracy, which can include hit rates, false positive rates, false negative rates, confidence intervals, and the extent to which a test-taker’s scores resemble those of applicable comparison groups (e.g., patients with depression or PTSD, noncredible patients); a brief worked sketch of these error-rate statistics follows this list.

    • Experts can provide information as to sample sizes used in the validation of a technique. That is, did validation studies involve adequate sample sizes to ensure reliability of the data collected (i.e., small sample sizes may lead to data that are not representative of the larger population).

    • Experts can provide information as to whether the testing technique is valid for use for individuals with the test-taker’s demographics, such as languages spoken and linguistic history (e.g., English as a second language status), educational level and quality of education, literacy, and gender.

  5. Is the technique in common use?

    • Experts can cite published surveys of test usage in clinical and forensic settings (e.g., LaDuke et al., Citation2018; Rabin et al., Citation2016).
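
Regarding criterion 4, the following minimal sketch (referenced in the list above) shows how such error-rate statistics are computed; the classification counts are entirely hypothetical and describe no published validation study:

    from math import sqrt

    def classification_stats(tp, fp, tn, fn):
        """Accuracy statistics from a validation study's classification counts."""
        sensitivity = tp / (tp + fn)          # hit rate among truly noncredible cases
        false_positive_rate = fp / (fp + tn)  # credible cases wrongly flagged
        n, p = fp + tn, fp / (fp + tn)
        half_width = 1.96 * sqrt(p * (1 - p) / n)  # normal-approximation 95% CI
        return sensitivity, false_positive_rate, (p - half_width, p + half_width)

    # Hypothetical validation sample: 100 noncredible and 200 credible test-takers
    sens, fpr, ci = classification_stats(tp=60, fp=10, tn=190, fn=40)
    print(f"sensitivity = {sens:.2f}, false positive rate = {fpr:.2f}, "
          f"95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")  # 0.60, 0.05, [0.020, 0.080]

Larger validation samples narrow the confidence interval, which is one reason experts report sample sizes alongside error rates.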

With access to the above, attorneys have a wide range of information by which to (1) judge whether test data collected and interpreted on their clients are, in fact, accurate and reliable, and (2) craft cross-examination questions. Additionally, neuropsychologists retained by opposing counsel are routinely able to examine test data sheets, available test administration recordings, and score reports and narrative clinical reports for administration, scoring, and interpretation errors, and then convey that information to attorneys without compromising test security.

Is attorney access to protected test information necessary for cross-examination of neuropsychological/psychological experts, or does it introduce error?

Despite having the abovementioned information by which to thoroughly critique neuropsychological/psychological test data and cross-examine neuropsychological experts, attorneys may demand direct access to the tests themselves, as well as test administration and interpretation manuals, test data sheets, recordings of testing (if available), and score reports and narrative clinical reports provided by test publishers. The argument is that they must have all information necessary to rigorously prosecute or defend their cases. However, does access to such protected information actually assist attorneys in cross-examination and management of their cases?

If an attorney were to be provided access to these protected test materials, what does the attorney do with this information? Expertise in neuropsychology involves years of supervised training at doctoral and post-doctoral levels. During graduate school, the psychology trainee completes coursework in psychological assessment (including training in test administration, scoring, and interpretation), and in understanding test construction. The psychologist then receives additional education in how to administer, score, and interpret neuropsychological tests, followed by two years of full-time post-doctoral training in neuropsychology. Throughout training, the neuropsychologist learns about psychiatric, neurological, and medical conditions that can contribute to changes in cognitive and emotional functioning, and to correlate that knowledge with patterns of performance across batteries of tests. The neuropsychologist is also taught to consider sociocultural factors that can affect an examinee’s performance on tests. By the conclusion of their formal training, most neuropsychologists have tested dozens of patients, and neuropsychologists in the forensic arena have typically tested hundreds, if not thousands, of claimants, plaintiffs, and defendants. In addition, neuropsychologists will read thousands of peer-reviewed studies regarding neuropsychological testing throughout their careers, and attend hundreds of hours of continuing education coursework, thereby continuing to increase their understanding and nuanced interpretation of neuropsychological test results. In contrast, attorneys have no training/education in psychological/neuropsychological testing (and it would in fact be an ethical violation for educators to provide such training to non-psychologists).

Is the expectation that the attorney will attempt, by simply looking at test responses or referring to a test manual, to determine if tests were administered, scored, and interpreted correctly when the attorney in fact has no training and experience in doing this? The analogy would be, if attorneys are presented with brain imaging scans or EEG tracings, is it appropriate that they look to manuals to determine if the scans and tracings were read correctly by neuroradiologists and neurologists who have years of residency training after completion of a medical degree? Who will be checking the attorney’s work product for scoring and interpretation accuracy? Attorneys do not need direct access to the protected test materials when the only means of properly evaluating test findings requires consultation with a bona fide psychological/neuropsychological expert, whom they have the option of retaining without compromising test security.

Is it not likely that attorneys, when crafting cross-examination questions regarding test data, will introduce error and inaccurate information to the jury by attempting to analyze the test data themselves? The jury must inevitably weigh, “Is the attorney correct in the concerns being raised, or is the expert correct when asserting that tests were administered, scored, and interpreted correctly?” The argument could be made that the questions crafted by someone with no training/experience in neuropsychological assessment are actually prejudicial (i.e. unfairly biasing the jury against a party) rather than informative.

Illustrative examples from neuropsychological testing

Neuropsychologists administer objective, validated tests to sample a wide range of neurocognitive skills, including overall intellectual, learning/memory, attention, processing speed, visual perceptual/spatial, language, executive/problem-solving, and motor dexterity abilities, as well as PVTs (i.e. indicators verifying that test-takers have performed to true ability). A key tenet of neuropsychological testing is that the tests are not designed to provide reliable and valid interpretations at the individual item level. That is, multiple items are included in tests precisely because it is the pattern formed by, or summative contribution of, individual items that generates useful data.

In the Wechsler Adult Intelligence Scale (WAIS) family of tests, the test-taker is asked to define vocabulary items, and responses are scored as to whether they are sophisticated and complete (2 points), “in the ballpark” (1 point), or “missing the mark” (0 points). While the manual provides examples of 0-, 1-, and 2-point responses, formal training in neuropsychological testing guides the neuropsychologist in understanding the distinction between score categories and in accurately selecting the correct score for each item. The individual vocabulary items are not interpreted in isolation; rather, it is the total score that is interpreted, after conversion to a “scaled score” adjusted for age and potentially other demographic factors, such as education. Rather than relying solely on test manuals, neuropsychologists keep abreast of peer-reviewed literature that guides interpretation of vocabulary test scores, such as consideration of the impact of English-as-a-second-language status on the ability to define vocabulary items in English (Razani et al., Citation2007). An individual without training in test scoring and interpretation would likely be unable to correctly score the individual vocabulary subtest items, or to accurately interpret the results in the context of demographic and other relevant factors. As a result, divulging the test content (i.e. what the individual vocabulary items are, and examples of 0-, 1-, and 2-point answers) serves no viable purpose and could harm future use of the test if future test-takers were informed of the words they will be asked to define, thereby allowing them to “pre-prepare” their performance.
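
The scale-level logic described above can be sketched as follows; the item credits and the raw-to-scaled conversion table are entirely hypothetical, so no actual test content or norms are reproduced:

    # Illustrative scale-level scoring: item credits are summed and only the
    # total is interpreted, via conversion to an age-adjusted scaled score.
    item_scores = [2, 1, 2, 0, 1, 2, 2, 1]   # hypothetical 0/1/2-point credits
    raw_total = sum(item_scores)             # individual items are not interpreted

    # Hypothetical raw-to-scaled conversion for one age band (scaled-score
    # metric: mean 10, SD 3)
    hypothetical_norms = {range(0, 5): 6, range(5, 9): 8,
                          range(9, 12): 10, range(12, 17): 12}
    scaled = next(s for band, s in hypothetical_norms.items() if raw_total in band)
    print(raw_total, scaled)                 # 11 -> 10, an average-range score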

In testing of verbal learning/memory, test-takers may be read a list of words to learn, or asked to listen to a short story and later recall details about the story. It is not performance on individual words or story details that is interpreted, but rather total scores across learning and recall trials (i.e. the total number of words or story details recalled on each test trial). In this context, divulging the actual test content (i.e. the exact word lists and stories to be recalled) again serves no viable purpose and harms future use of the test. In other words, there is no benefit to providing an attorney with actual test stimuli because scores are interpreted, not responses for individual test items, and release of this information to non-psychologists has the potential to compromise memory tests for future use. That is, if the word lists and stories are available to future test-takers, these individuals could decide beforehand how to adjust their performances. For example, pilots or physicians with mild cognitive impairment who are required to undergo fitness for duty evaluations could drill themselves in learning/recalling the information before the exam to mask actual difficulties in short-term memory that normally would be detected in individuals naïve to the tests. Conversely, if provided with actual test stimuli and information as to scores of normal and memory-impaired groups, individuals intent on fabricating memory difficulties could titrate their recall to match that of memory-impaired groups, thereby portraying memory deficits they do not in fact have. By attempting to match their performances to memory-impaired groups, they can ensure that they lower their performance “enough” to document memory impairment, while at the same time, not performing so poorly as to be unbelievable. If memory test data are only provided to retained neuropsychologists, these genuine experts can determine if tests were administered, scored, and interpreted correctly, while still guarding actual test stimuli from exposure, thereby protecting tests for future use.

Of particular concern is divulging to attorneys the content, procedures, scoring, and interpretation methods of PVTs. PVTs are validated measures that allow the examiner to verify that a test-taker is performing to true ability on neurocognitive testing, and that neurocognitive test scores reflect actual skill level. Without PVT indicators, the examiner has limited objective information regarding the veracity of neurocognitive test performance. PVTs are mandated to be used by neuropsychologists in both clinical and forensic exams (Sweet, Heilbronner, et al., Citation2021). Failures on multiple PVTs within a neuropsychological exam signal that test-takers are not performing to true ability, and that they are portraying cognitive dysfunction that does not exist, at least at the levels they display. Excessive rates of PVT failures are frequently found in personal injury litigants (Mittenberg et al., Citation2002), disability seekers (Mittenberg et al., Citation2002), individuals seeking academic accommodations in the context of claimed ADHD (Harrison, Lee, & Suhr, Citation2021), and criminal defendants (Denney & Fazio, Citation2021). It is critical that examiners be able to identify which test-takers are performing to actual ability and which are not, given the important decisions that are made based on the results of cognitive testing in these various contexts.

PVTs are particularly vulnerable if test stimuli and methods of administration, scoring and interpretation are divulged to attorneys incentivized to advocate for their clients by coaching them on how to “pass” these measures, which can help the clients portray cognitive dysfunction that does not exist. In this situation, protective orders may not be adhered to, and even if test data are destroyed at the conclusion of a case, once attorneys have seen the actual PVTs, it is not likely that they will forget the test stimuli and instructions, and they will still be able to coach future clients. If PVT data are only provided to neuropsychological experts retained in cases, they can judge if PVTs were scored and interpreted correctly while still protecting test content, and thereby, future use of the tests.

Illustrative examples from psychological/personality testing

The most frequently administered objective personality tests are versions of the Minnesota Multiphasic Personality Inventory (MMPI; see Martin et al., Citation2015, for frequency of use data). The restructured form of the MMPI-2 (MMPI-2-RF) consists of 338 true-false items (Ben-Porath & Tellegen, 2008/Citation2011), and the more recent MMPI-3 comprises 335 items (Ben-Porath & Tellegen, Citation2020). The MMPI-3 contains 42 substantive scales assessing various psychological traits and conditions, as well as 10 validity scales. Among the validity scales are five “over-reporting” scales (Symptom Validity Tests; SVTs) that evaluate whether test-takers are reporting psychiatric (including psychotic), physical, and cognitive/memory symptoms accurately. While a test manual is available, neuropsychologists/psychologists have the option of a score/narrative report from the publisher that provides potential diagnoses and descriptors of the test-taker. As with neuropsychological tests, a key rule of psychological testing is that the tests are not designed to be interpreted at the level of individual items; when individual items are interpreted, the measure is no longer a psychometric “test.”

Attorneys who have accessed protected materials from MMPI instruments (manuals, test items, and narrative score reports), have asked the following type of question:

Isn’t presence of ‘X’ an item on FBS (physical and/or cognitive symptom over-reporting scale)? Because my client said that he has the physical symptom ‘X,’ he gets a point on this scale that says that he is lying?

A fundamental problem with this question is that neuropsychologists do not identify “lying,” but rather whether a pattern of symptom reporting is plausible and thereby likely credible. Further, a neuropsychologist with training and experience in the use of the MMPI-3 and earlier instruments well understands that these tests are not interpreted at the individual item level, but rather by the total number of items answered in the keyed direction on the scale. The analogy would be if you go to a medical office with a true illness, and you are asked to fill out a symptom checklist as to the symptoms you are experiencing. On a 60-item checklist, patients with a specific condition might endorse 10 or 15 symptoms that reflect actual symptoms from the disease. But if 55 or 60 items were endorsed, a physician would recognize that this is not a plausible symptom report in that there is no medical condition that would be consistent with this number of symptoms. In this situation, the content of the individual items is not relevant, but rather the total number endorsed. If the MMPI-3 data were provided to an opposing neuropsychologist expert, this professional would analyze whether appropriate interpretations of scale scores were provided. The expert would not suggest that an attorney ask questions regarding the content of individual items endorsed by the test-taker, because this does not comply with the standard interpretive guidelines for the test.
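
The checklist analogy reduces to simple counting, as the following sketch shows; the response patterns and the plausibility cutoff are hypothetical:

    # Minimal sketch of the checklist analogy: interpretation rests on the total
    # count of endorsed items, never on any single item's content.
    genuine_patient = [True] * 12 + [False] * 48     # 12 of 60 symptoms endorsed
    implausible_report = [True] * 56 + [False] * 4   # 56 of 60 endorsed

    for label, responses in [("genuine", genuine_patient),
                             ("implausible", implausible_report)]:
        count = sum(responses)
        # Hypothetical plausibility cutoff: no known condition produces nearly
        # every symptom on a broad checklist.
        flag = "over-report" if count > 40 else "plausible"
        print(f"{label}: {count}/60 endorsed -> {flag}")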

By asking questions regarding answers to individual items, the attorney is constructing a straw man, and introducing error and misinformation through insinuating to the jury that the neuropsychologist should have interpreted answers to individual items. Such questions are inconsistent with the Daubert criteria for admissibility of testimony, because interpretation of answers to individual items does not reflect a standardized procedure, is not in common use, has not been validated in real world samples, has no peer-reviewed method for individual item analysis, and item-level analysis does not have a known error rate. Regarding the latter, it can reasonably be expected that item level interpretation would have a substantial error rate because the test was not developed and validated to be interpreted at the individual item level.

As an additional example, attorneys who have accessed manuals from MMPI instruments have asked:

With the score of 86T on the RBS (memory symptom over-report scale), what does the manual say about how to interpret that? Doesn’t it say that there are three possibilities as to why my client got that high score? First, that he didn’t adequately understand or comprehend the items, correct? Second, that he has psychiatric conditions that could account for that high score, correct? Third, that he was over-reporting. You were given three choices by the test manual, but you didn’t tell the jury that, did you? You told them that my client was over-reporting memory symptoms, exaggerating, when there were two other possibilities?

But a psychologist with training and experience in the use of the MMPI-3 and earlier instruments knows that when the test manual provides three possible score interpretations, the examiner is expected to carefully evaluate the three options to determine which is the most accurate and likely. Examiners are not expected to simply list the three options in a report as if they are all equally plausible. An analogy would be, if a chest x-ray shows an abnormality that could represent artifact (imaging error), pneumonia, or cancer, a physician is expected to obtain further information to determine which of the three is accurate, not just list the three options in the chart note.

By looking at the first three validity scales on the MMPI-3 (VRIN, TRIN, and CRIN), an evaluator can quickly rule in or rule out failure to understand/comprehend test items. Those scales are specifically designed to check whether the test-taker had difficulty understanding the items, was responding randomly, was over-selecting true or false answers, etc. If normal-range scores on these scales are obtained, it means that the test-taker was responding appropriately to the content of the items. Regarding the next option, that a true psychiatric disturbance accounts for the RBS T-score of 86, the examiner can rule in or rule out this option by comparing the test-taker’s score against that of groups of individuals with psychiatric conditions (such as data provided by the test publisher on patients in outpatient psychiatric treatment). If the test-taker’s score is substantially elevated compared to that of psychiatric comparison groups, this would indicate that it is unlikely that an actual psychiatric condition caused the elevated score on this over-report scale. Additionally, the examiner can compare the test-taker’s score against scores for credible psychiatric patients seeking disability compensation versus disability seekers determined to be noncredible (e.g. Tylicki et al., Citation2022). If the test-taker’s score is substantially higher than that of credible psychiatric disability seekers and more comparable to that of noncredible disability seekers, that would point to noncredible symptom over-reporting by the test-taker.
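
The comparison logic in this paragraph can be expressed numerically; in the sketch below, the group means and standard deviations are hypothetical stand-ins rather than published MMPI-3 comparison data:

    # Illustrative comparison of an obtained validity-scale score against
    # comparison-group distributions; all group statistics are hypothetical.
    obtained = 86.0  # T-score from the cross-examination example above

    groups = {
        "outpatient psychiatric (hypothetical)":         (58.0, 9.0),
        "credible disability seekers (hypothetical)":    (62.0, 10.0),
        "noncredible disability seekers (hypothetical)": (84.0, 11.0),
    }
    for name, (mean, sd) in groups.items():
        z = (obtained - mean) / sd
        print(f"{name}: z = {z:+.1f}")
    # Output: roughly +3.1, +2.4, +0.2; the obtained score is far above the
    # credible groups but typical of the noncredible group, supporting the
    # over-reporting interpretation.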

By asking why the expert did not provide all three options for the elevation on the over-reporting scale T-score, the opposing attorney is arguably introducing error and misinformation by suggesting that the expert was derelict and/or biased in reporting only one of the three options. The expert is expected to conduct additional analysis to determine which of the three possibilities is most likely, and then to report that option; this is the core of their expertise that is valuable to the finder of fact.

Summary and conclusion

The above examples serve to demonstrate that there is no reasonable benefit to providing protected test data to attorneys, and the resulting damage to neuropsychological/psychological tests, public health and safety, and the judicial process itself is substantial. The public policy underlying the right of attorneys to seek possibly relevant documents does not outweigh the damage caused by release of protected test information directly to attorneys. The solution recommended by neuropsychological and psychological organizations, and by test publishers, is to have protected psychological test information exchanged directly and only between retained licensed psychologist experts. Having retained experts receive the test data, rather than attorneys, allows for an accurate critique of the test data and resultant expert opinions without introducing misinformation to the jury, and protects the tests for future use. With direct attorney access to neuropsychological/psychological tests, accurate knowledge is not provided to the trier of fact; that is, the jury is not helped to understand the test results, but instead the information conveyed “muddies the waters,” and is prejudicial in that it has the potential to confuse the issues and mislead the jury. Ultimately, what is undermined and excluded from the judicial proceedings is “science.” When psychological tests are not protected for future use, and when attorneys ask “red herring” questions that misrepresent test data in a manner not intended by test developers and peer-reviewed literature, what is being communicated is that there is no place for science-based information in legal proceedings. It is nonsensical that Daubert criteria are used to ensure the scientific integrity of expert testimony, while at the same time untrained attorneys are allowed to introduce inaccurate interpretations of test findings to juries.

Finally, when attorneys demand access to protected test information, a very likely outcome is that neuropsychologists and psychologists will refuse to comply in the interest of maintaining test security, and will either refuse to be retained or withdraw from cases, rather than jeopardize the tools of our profession. For example, in a survey of 77 board certified neuropsychologists in California conducted in 2020, 96% to 98% of them indicated that protected test data are only to be shared between licensed psychologists, and that attorneys do not have education, training, experience, or ethical obligations to properly analyze, interpret, or maintain the security of raw test data (Declaration by Drs. Henry and Lechuga, 2023). When psychologists withdraw from cases, it precludes the trier of fact from being provided with objective, appropriately interpreted test findings, which are arguably important for accurate and fair jury decisions. It also prevents defendants, plaintiffs, and claimants from obtaining the mental examinations to which they are entitled as a matter of law.

If neuropsychological and psychological tests are not protected, neuropsychological science will no longer be available to assist the trier of fact in an area that is far removed from their experience. Without objective neuropsychological/psychological tests, including performance and symptom validity measures, the judicial system will be “going backward in time” (i.e. “dis-inventing the wheel,” per the U.S. Supreme Court) in terms of accuracy of information provided to courts.

Acknowledgments

We greatly value the comments to the manuscript provided by Yossef Ben-Porath, Ph.D. The authors also thank the AACN Board of Directors and the AACN Publications Committee for their review and suggestions regarding this article.

Disclosure statement

The authors are forensic experts in neuropsychology and psychology, and/or attorneys.

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

References

  • Andersson, C., Marklund, K., Walles, H., Hagman, G., & Miley-Akerstedt, A. (2019). Lifestyle factors and subjective cognitive impairment in patients seeking help at a memory disorder clinic: The role of negative life events. Dementia and Geriatric Cognitive Disorders, 48(3–4), 196–206. https://doi.org/10.1159/000505573
  • Aparcero, M., Picard, E. H., Nijdam-Jones, A., & Rosenfeld, B. (2021). The impact of coaching on feigned psychiatric and medical symptoms: A meta-analysis using the MMPI-2. Psychological Assessment, 33(8), 729–745. https://doi.org/10.1037/pas0001016
  • Ardolf, B. R., Denney, R. L., & Houston, C. M. (2007). Base rates of negative response bias and malingered neurocognitive dysfunction among criminal defendants referred for neuropsychological evaluation. The Clinical Neuropsychologist, 21(6), 899–916. https://doi.org/10.1080/13825580600966391
  • Barefoot v. Estelle, 463 U.S. 880, 896 (1983).
  • Ben-Porath, Y. S., & Tellegen, A. (2008/2011). MMPI-2-RF (Minnesota Multiphasic Personality Inventory–2 Restructured Form): Manual for administration, scoring, and interpretation. University of Minnesota Press.
  • Ben-Porath, Y. S., & Tellegen, A. (2020). MMPI-3 (Minnesota Multiphasic Personality Inventory–3): Manual for administration, scoring, and interpretation. University of Minnesota Press.
  • Boone, K. B., Litvin, P., & Victor, T. L. (2021). Base rates of feigned mild traumatic brain injury. In Boone, K. B. (Ed.), Assessment of feigned cognitive impairment. Guilford Publications.
  • Boone, K. B., Sweet, J. J., Byrd, D. A., Denney, R. L., Hanks, R. A., Kaufmann, P. M., Kirkwood, M. W., Larrabee, G. J., Marcopulos, B. A., Morgan, J. E., Paltzer, J. Y., Rivera Mindt, M., Schroeder, R. W., Sim, A. H., & Suhr, J. A. (2022). Official position of the American Academy of Clinical Neuropsychology on test security. The Clinical Neuropsychologist, 36(3), 523–545. https://doi.org/10.1080/13854046.2021.2022214
  • Chiperas v. Rubin, 1998 U.S. Dist. LEXIS 23578 (D.C. Cir., 1998).
  • Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579 (1993).
  • Denney, R. L., & Fazio, R. L. (2021). Assessment of feigned cognitive impairment in criminal forensic neuropsychological settings. In Boone, K. B. (Ed.). Assessment of feigned cognitive impairment. Guilford Publications.
  • Detroit Edison Co. v. NLRB, 440 U.S. 301 (1979).
  • Douglas v. Parkview Adventist Med. Ctr., 2017 Me. Super. LEXIS 96 (Me. Super. Ct. 2017).
  • Edmonds, E. C., Delano-Wood, L., Galasko, D. R., Salmon, D. P., & Bondi, M. W. (2014). Subjective cognitive complaints contribute to misdiagnosis of mild cognitive impairment. Journal of the International Neuropsychological Society: JINS, 20(8), 836–847. https://doi.org/10.1017/S135561771400068X
  • Essig, S. M., Mittenberg, W., Petersen, R. S., Strauman, S., & Cooper, J. T. (2001). Practices in forensic neuropsychology: Perspectives of neuropsychologists and trial attorneys. Archives of Clinical Neuropsychology, 16(3), 271–291. https://doi.org/10.1093/arclin/16.3.271
  • Faust, D., Hart, K. J., Guilmette, T. J., & Arkes, H. R. (1988). Neuropsychologists’ capacity to detect adolescent malingerers. Professional Psychology: Research and Practice, 19(5), 508–515. https://doi.org/10.1037/0735-7028.19.5.508
  • Fed. R. Evid. 702, 403 (2023).
  • Frye v. United States, 293 F. 1013 (D.C. Cir. 1923).
  • Gardner, R. C., Langa, K. M., & Yaffe, K. (2017). Subjective and objective cognitive function among older adults with a history of traumatic brain injury: A population-based cohort study. PLoS Medicine, 14(3), e1002246. https://doi.org/10.1371/journal.pmed.1002246
  • Harrison, A. G., Lee, G. J., & Suhr, J. A. (2021). Use of performance validity tests and symptom validity tests in assessment of specific learning disorders and attention-deficit/hyperactivity disorder. In Boone, K. B. (Ed.), Assessment of feigned cognitive impairment. Guilford Publications.
  • Heaton, R. K., Smith, H. H., Lehman, R. A., & Vogt, A. T. (1978). Prospects for faking believable deficits on neuropsychological testing. Journal of Consulting and Clinical Psychology, 46(5), 892–900. https://doi.org/10.1037//0022-006x.46.5.892
  • Howlett, C. A., Wewege, M. A., Berryman, C., Oldach, A., Jennings, E., Moore, E., Karran, E. L., Szeto, K., Pronk, L., Miles, S., & Moseley, G. L. (2022). Back to the drawing board—The relationship between self-report and neuropsychological tests of cognitive flexibility in clinical cohorts: A systematic review and meta-analysis. Neuropsychology, 36(5), 347–372. https://doi.org/10.1037/neu0000796
  • Iowa Admin. Code r. 645-243.4 (2023).
  • Iowa Code Ann. § 228.9 (2023).
  • Jaffee v. Redmond, 518 U.S. 1 (1996).
  • Kaufmann, P. M. (2005). Protecting the objectivity, fairness, and integrity of neuropsychological evaluations in litigation: A privilege second to none? The Journal of Legal Medicine, 26(1), 95–131. https://doi.org/10.1080/01947640590918007
  • Kaufmann, P. M. (2009). Protecting raw data and psychological tests from wrongful disclosure: A primer on the law and other persuasive strategies. The Clinical Neuropsychologist, 23(7), 1130–1159. https://doi.org/10.1080/13854040903107809
  • Kaufmann, P. M. (2013). Neuropsychologist experts and neurolaw: Cases, controversies, and admissibility challenges. Behavioral Sciences & the Law, 31(6), 739–755. Special Issue: Traumatic Brain Injury. https://doi.org/10.1002/bsl.2085
  • LaDuke, C., Barr, W., Brodale, D. L., & Rabin, L. A. (2018). Toward generally accepted forensic assessment practices among clinical neuropsychologists: A survey of professional practice and common test use. The Clinical Neuropsychologist, 32(1), 145–164. https://doi.org/10.1080/13854046.2017.1346711
  • Fry, L., Logemann, A., Waldron, E., Holker, E., Porter, J., Eskridge, C., Naini, S., Basso, M. R., Taylor, S. E., Melnik, T., & Whiteside, D. M. (2003). Detection of malingering using atypical performance patterns on standard neuropsychological tests. The Clinical Neuropsychologist, 17(3), 1–21.
  • Martin, P. K., & Schroeder, R. W. (2020). Base rates of invalid test performance across clinical non-forensic contexts and settings. Archives of Clinical Neuropsychology: The Official Journal of the National Academy of Neuropsychologists, 35(6), 717–725. https://doi.org/10.1093/arclin/acaa017
  • Martin, P. K., Schroeder, R. W., & Odland, A. P. (2015). Neuropsychologists’ validity testing beliefs and practices: A survey of North American professionals. The Clinical Neuropsychologist, 29(6), 741–776. https://doi.org/10.1080/13854046.2015.1087597
  • Me. Rev. Stat. 22 § 1725 (2023).
  • Mittenberg, W., Patton, C., Canyock, E. M., & Condit, D. C. (2002). Base rates of malingering and symptom exaggeration. Journal of Clinical and Experimental Neuropsychology, 24(8), 1094–1102. https://doi.org/10.1076/jcen.24.8.1094.8379
  • Rabin, L. A., Paolillo, E., & Barr, W. B. (2016). Stability in test-usage practices of clinical neuropsychologists in the United States and Canada over a 10-year period: A follow-up survey of INS and NAN members. Archives of Clinical Neuropsychology: The Official Journal of the National Academy of Neuropsychologists, 31(3), 206–230. https://doi.org/10.1093/arclin/acw007
  • Razani, J., Murcia, G., Tabares, J., & Wong, J. (2007). The effects of culture on WASI test performance in ethnically diverse individuals. The Clinical Neuropsychologist, 21(5), 776–788. https://doi.org/10.1080/13854040701437481
  • Randy’s Trucking Inc. v. Sup. Ct. of Kern Cty., 91 Cal. App. 5th 818 (Cal. Ct. App. 2023).
  • Spencer, R. J., Drag, L. L., Walker, S. J., & Bieliauskas, L. A. (2010). Self-reported cognitive symptoms following mild traumatic brain injury are poorly associated with neuropsychological performance in OIF/OEF veterans. Journal of Rehabilitation Research and Development, 47(6), 521–530. https://doi.org/10.1682/jrrd.2009.11.0181
  • Spengler, P. M., Walters, N. T., Bryan, E., & Millspaugh, B. S. (2020). Attorneys’ attitudes toward coaching forensic clients on the MMPI–2: Replication and extension of attorney survey by Wetter and Corrigan (1995). Journal of Personality Assessment, 102(1), 56–65. https://doi.org/10.1080/00223891.2018.1501568
  • Stevens, A., Friedel, E., Mehren, G., & Merten, T. (2008). Malingering and uncooperativeness in psychiatric and psychological assessment: Prevalence and effects in a German sample of claimants. Psychiatry Research, 157(1-3), 191–200. https://doi.org/10.1016/j.psychres.2007.01.003
  • Sweet, J. J., Boone, K. B., Denney, R. L., Hebben, N., Marcopulos, B. A., Morgan, J. E., Nelson, N. W., & Westerveld, M. (2023). Forensic neuropsychology: History and current status. The Clinical Neuropsychologist, 37(3), 459–474. https://doi.org/10.1080/13854046.2022.2078740
  • Sweet, J. J., Heilbronner, R. L., Morgan, J. E., Larrabee, G. J., Rohling, M. L., Boone, K. B., Kirkwood, M. W., Schroeder, R. W., … & Suhr, J. A. (2021). American Academy of Clinical Neuropsychology (AACN) 2021 consensus statement on validity assessment: Update of the 2009 AACN consensus conference statement on neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 35(6), 1053–1106. https://doi.org/10.1080/13854046.2021.1896036
  • Sweet, J. J., Klipfel, K. M., Nelson, N. W., & Moberg, P. J. (2021). Professional practices, beliefs, and incomes of U.S. neuropsychologists: The AACN, NAN, SCN 2020 practice and salary survey. The Clinical Neuropsychologist, 35(1), 7–80. https://doi.org/10.1080/13854046.2020.1849803
  • Tassoni, M. B., Drabick, D. A. G., & Giovannetti, T. (2023). The frequency of self-reported memory failures is influenced by everyday context across the lifespan: Implications for neuropsychology research and clinical practice. The Clinical Neuropsychologist, 37(6), 1115–1135. https://doi.org/10.1080/13854046.2022.2112297
  • Tylicki, J. L., Gervais, R. O., & Ben-Porath, Y. S. (2022). Examination of the MMPI-3 over-reporting scales in a forensic disability sample. The Clinical Neuropsychologist, 36(7), 1878–1901. https://doi.org/10.1080/13854046.2020.1856414
  • Tzotzoli, P. (2012). A guide to neuropsychological report writing. Health, 04(10), 821–823. https://doi.org/10.4236/health.2012.410126
  • Victor, T. L., & Abeles, N. (2004). Coaching clients to take psychological and neuropsychological tests: A clash of ethical obligations. Professional Psychology: Research and Practice, 35(4), 373–379. https://doi.org/10.1037/0735-7028.35.4.373
  • Western Psychological Services. (2023). WPS Test Security Position Statement: https://ecom-cdn.wpspublish.com/prod/media/content-wps/WPS-Position-Statement-re-Test%20Security-2022-11-04-DSHsigned.pdf
  • Wetter, M. W., & Corrigan, S. K. (1995). Providing information to clients about psychological tests: A survey of attorneys’ and law students’ attitudes. Professional Psychology: Research and Practice, 26(5), 474–477. https://doi.org/10.1037/0735-7028.26.5.474
  • Winans v. N.Y. & Erie Railroad Co., 62 U.S. 88 (1859).
  • Youngjohn, J. R. (1995). Confirmed attorney coaching prior to neuropsychological evaluation. Assessment, 2(3), 279–283. https://doi.org/10.1177/1073191195002003007
