Editorial

Innovations in performance and symptom validity testing: introduction to symptom validity section of the special issue

The first part of this special issue of the Journal of Clinical and Experimental Neuropsychology focused on innovations in performance validity testing (PVT), an area of rapid growth and clinical necessity. Research on the related area of symptom validity testing (SVT) has likewise grown significantly in recent years. Thus, as a companion to the first part of this special issue, this section focuses on innovations in SVT research.

Historical context for symptom validity tests

There is a long history of SVTs in psychology that predates the development of PVTs. The most widely known are the SVTs developed for the original Minnesota Multiphasic Personality Inventory (MMPI; Hathaway & McKinley, 1942) in the 1930s and 1940s. SVTs were originally designed to evaluate symptom exaggeration and minimization in psychiatric contexts (Colligan, 1985). For example, malingered psychosis was a consideration in the development and refinement of early SVTs. Today, SVTs are beneficial in characterizing the response styles of examinees in clinical and forensic settings, and they are commonly used, albeit with somewhat different emphases, by neuropsychologists, clinical psychologists, and forensic psychologists conducting evaluations.

While the MMPI was the first broad-band personality measure to develop SVTs, subsequent personality measures followed suit. For example, the Personality Assessment Inventory (PAI; Morey, 2007) and the Millon family of scales, such as the Millon Clinical Multiaxial Inventory-IV (MCMI-IV; Millon et al., 2019), were developed with embedded SVTs. Moreover, successors to the MMPI (i.e., the MMPI-2-RF and MMPI-3) implemented new SVTs. Thus, most SVTs have been developed within broad-band personality measures rather than as stand-alone measures of response bias.

Types of symptom validity tests

To provide further context for the papers included in this special issue, a brief discussion of the types of SVTs may be useful. Classically, there are two broad categories of SVTs: content-based and non-content-based. Non-content SVTs detect response patterns produced by test takers who disregard the content of the items. Hall and Ben-Porath (2022) subdivided non-content responding into three subtypes: (1) nonresponding, in which test takers simply do not answer items; (2) random responding, in which test takers answer items at random; and (3) fixed responding, in which test takers respond in the same way regardless of item content. Examples of non-content SVTs from the MMPI family of scales include TRIN and VRIN, while the PAI non-content SVTs include ICN and INF. In contrast, content-based SVTs assess whether test takers are distorting the true nature of their condition. This type of SVT can be subdivided into overreporting measures, which assess whether test takers are endorsing more distress or dysfunction than they are actually experiencing, and underreporting measures, which assess whether test takers are minimizing or denying psychological issues that are actually present (Hall & Ben-Porath, 2022). The K and L scales on the MMPI and PIM on the PAI are examples of underreporting SVTs, while the F family of scales on the MMPI and NIM on the PAI are examples of overreporting SVTs. More recently, SVTs have been developed that focus on exaggeration or embellishment of cognitive and physical symptoms, rather than traditional psychiatric symptoms (e.g., FBS/FBS-r for the MMPI-2/MMPI-2-RF, Ben-Porath & Tellegen, 2008; the Cognitive Bias Scale for the PAI, Gaasedelen et al., 2019), presentations more typically encountered in neuropsychological evaluations (Sherman et al., 2020).

Symptom validity-performance validity distinction

Although both SVTs and PVTs have been used in neuropsychological evaluations for many decades, a clear distinction between them was not well articulated until Larrabee (2012): PVTs assess the credibility of task performance on objective cognitive measures, whereas SVTs assess the accuracy of subjective symptom complaints. Prior to that, it was common to refer to cognitively based measures of performance validity as SVTs. The PVT-SVT distinction has proved an effective conceptualization for the field, facilitating the development of research in both areas.

Introduction to the special issue section on symptom validity testing: Current research

Of course, with any new development, questions arise that guide and direct research. For example, the articulation of the PVT-SVT distinction has raised the question of how much PVTs and SVTs overlap, and how much they differ, in neuropsychological assessment. The extent of the relationship between PVTs and SVTs continues to be studied (Gervais et al., 2007; Leib et al., 2022), including the question of whether SVTs serve as proxies for PVT performance. A study in our special issue addresses this question. Using the PAI, Boress et al. (2023) examined the relationship between SVTs and PVTs and found that it is complex and varies across clinical samples. Thus, PVTs and SVTs may be thought of as siblings: clearly related but also distinct individuals with complex inter-relationships.

Thus, there is a need for more research on the interplay between SVTs and PVTs. Recent research has developed scales that attempt to predict PVT performance and cognitive symptom exaggeration, such as the Cognitive Bias Scale on the PAI (CBS; Gaasedelen et al., 2019) and the Response Bias Scale for the MMPI versions (Gervais et al., 2007). Another line of research focuses on SVTs within measures designed to assess specific diagnoses, such as attention deficit-hyperactivity disorder (ADHD; Suhr & Berry, 2017) and posttraumatic stress disorder (PTSD; Blevins et al., 2015). To facilitate the growth of research in these and other areas, the papers in this issue address a range of SVT topics, including the development of new SVTs. Because many cases of PTSD are associated with external incentives, such as motor vehicle accident litigation or civilian or military disability claims, examining symptom validity is critical. To further research in this area, Schroeder and Bieu (2024) studied the validity indices of the PTSD Checklist for DSM-5 (PCL-5) in a PTSD sample and found initial support for the PCL-5 SVTs. For symptom validity assessment in ADHD, Finley et al. (2023) conducted a cross-validation study of the validity scales on the Clinical Assessment of Attention Deficit-Adult (CAT-A). They found that the Negative Impression (NI) scale had the highest classification accuracy in detecting overreporting of ADHD symptoms, and that the Positive Impression (PI) scale had acceptable classification accuracy for underreporting of ADHD symptoms, albeit lower than that described in the manual.

Another area in need of further SVT research is forensic settings, including criminal forensic evaluations. For this issue, Denney et al. (2024) addressed this area with a study on the development of a new stand-alone measure of malingering designed specifically for criminal forensic practice. Denney et al. found that their new measure has promising psychometric properties, which may benefit psychologists and neuropsychologists practicing in this complex and challenging area.

The MMPI family of tests continues to rank among the most often-used tests in neuropsychological practice, and this special issue includes three papers that offer innovations concerning its SVTs. Notably, the MMPI is frequently used in active-duty military and veteran populations, creating a keen need for relevant research. Toward this end, Ingram, Armistead-Jehle, et al. (2024) evaluated the utility of the Response Bias Scale (RBS) and the short RBS (RBS-19) in an active-duty sample. They found that the scales possess satisfactory utility in identifying servicemembers who exaggerated complaints of malaise.

As with PVTs, novel and innovative SVT methodologies are needed to ensure the ongoing utility of these measures as risks to test security rise with the growth of social media. For this issue, Ingram, Keen, et al. (2024) developed a new SVT for the MMPI-2-RF and MMPI-3, a "scale of scales" designed to predict PVT performance, similar to a scale recently developed for the Personality Assessment Inventory (Boress et al., 2022). They found psychometric support for the measure, suggesting that this is a promising avenue of research. In a related paper, Ingram, Armistead-Jehle, et al. (2024) examined the RBS and the RBS-19 in the new MMPI-3 and found that these scales had classification accuracy similar to that of the RBS in earlier versions of the MMPI.

Finally, most of the research mentioned here has been conducted with participants and patients during in-person evaluations. However, since the COVID-19 pandemic, telehealth service has grown considerably, including in neuropsychology (Watson et al., 2023). Given the changing landscape of neuropsychological evaluation, research on the psychometrics of SVTs in telehealth contexts is critical. Shura et al. (2024) contributed to this research by examining MMPI SVTs in telehealth evaluations. They found that the SVT measures performed similarly in telehealth and traditional in-person evaluations, supporting the construct validity of SVTs in telehealth administration.

Recommendations on the use of SVTs

Recent consensus statements on PVT/SVT measures (Sweet et al., 2021) have tended to focus more on the use of PVTs in assessment. However, SVTs remain important in many assessment contexts, and the literature presented here provides further insight into their application in neuropsychological assessment. Ideally, SVTs would be used in all assessment contexts, but evaluation time is limited, and in some situations SVTs may be less critical; further research on this question is needed. Nonetheless, some recommendations on the use of SVTs are likely to be beneficial. Based on the papers in this special issue and existing consensus statements (Guilmette et al., 2020; Sherman et al., 2020), the following recommendations may be helpful to practicing neuropsychologists:

  1. SVTs should be employed when complaints of emotional, cognitive, or physical symptoms are salient. SVTs are particularly recommended when psychiatric issues are prominent in the referral question and/or patient presentation. Other contexts include medico-legal cases in which examinees may embellish or exaggerate reports of physical dysfunction even if they are not specifically embellishing psychiatric complaints per se.

  2. The use of SVTs should be considered when assessing younger patients who present with vague cognitive, physical, or emotional complaints. While some dementias occur in this age group, they are less common than in individuals who have reached the seventh decade of life. In younger patients, the base rate of neurologically based cognitive dysfunction may be low, and a thorough differential diagnostic assessment requires careful evaluation of emotional functioning. Broad-band personality measures with embedded SVTs are likely to be particularly helpful here.

  3. In contrast, not all patients referred for neuropsychological evaluation present with an extensive psychiatric history, so some may not need as extensive a psychological evaluation as others. One example is older adults seen for dementia concerns with no known external incentive. Depending upon the severity of their cognitive deficits, such patients may not be able to complete broad-band measures that include SVTs, and the use of SVTs in this population may be less critical to addressing the referral question. Regardless, neuropsychologists are advised to carefully consider contextual, cultural, and demographic variables when deciding whether to include measures with SVTs.

  4. As with PVTs, the use of SVTs is warranted whenever external incentives are known or suspected, regardless of the type of setting (clinical versus medico-legal). Using SVTs even when the patient denies external incentive is likely to be helpful since patients are not always honest about such issues.

  5. Of course, whenever a neuropsychologist conducts an evaluation in a medico-legal setting, the use of SVTs is strongly recommended. The combined use of SVTs and PVTs is indicated here because previous research has shown that, while there is some overlap in the constructs measured by SVTs and PVTs, each type of measure also accounts for considerable unique variance, which varies by population (Boress et al., 2023).

Future directions of symptom validity research

There are a number of directions for further SVT research. One important area is validating SVTs in specific populations associated with external incentives and validity test failures. For example, little research has been conducted on the incremental validity of SVTs over PVTs when assessing patients presenting with ADHD-related complaints (Shura et al., 2017; White et al., 2020). External incentives are prominent in this population, including academic/work accommodations and access to stimulant medications.

Another area ripe for innovation is the refinement of existing SVTs. For instance, as demonstrated in this special issue, a scale of scales combines existing SVT items in a novel manner, achieving incremental validity over the scales from which it originates. Further efforts to improve existing SVTs would advance the field.

A third area of focus is the development of stand-alone SVTs. Most SVTs are embedded within lengthy personality tests such as the MMPI or PAI. In clinical practice, time is at a premium, and the emphasis is typically on the assessment of neurocognitive function; when emotional distress is assessed, time often permits administration of only brief self-report measures. Development of brief stand-alone SVTs that assess improbable complaints of dysphoria or malaise would be a boon to clinicians.

In conclusion, the papers in this special issue address many questions related to SVTs and PVTs, but this is a broad and expanding area of active research. As these measures become better known to plaintiffs, attorneys, and individuals with external incentives, continuing research to develop innovative PVT and SVT methodology will be an ongoing need for the profession. It is hoped that the research published here will help generate future innovative research to improve the accurate assessment of both response sets and cognitive test validity.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The author(s) reported there is no funding associated with the work featured in this article.

References

  • Ben-Porath, Y. S., & Tellegen, A. (2008). Minnesota multiphasic personality inventory-2 restructured form (MMPI-2-RF): Manual for administration, scoring, and interpretation. University of Minnesota Press.
  • Blevins, C. A., Weathers, F. W., Davis, M. T., Witte, T. K., & Domino, J. L. (2015). The posttraumatic stress disorder checklist for DSM-5 (PCL-5): Development and initial psychometric evaluation. Journal of Traumatic Stress, 28(6), 489–498. https://doi.org/10.1002/jts.22059
  • Boress, K., Gaasedelen, O. J., Croghan, A., King Johnson, M., Caraher, K., Basso, M. R., & Whiteside, D. M. (2022). Validation of the personality assessment inventory (PAI) scale of scales in a mixed clinical sample. The Clinical Neuropsychologist, 36(7), 1844–1859. https://doi.org/10.1080/13854046.2021.1900400
  • Boress, K., Gaasedelen, O., Kim, J. H., Basso, M. R., & Whiteside, D. M. (2023). Examination of the relationship between symptom and performance validity measures across referral subtypes. Journal of Clinical and Experimental Neuropsychology, 1–10. Advance online publication. https://doi.org/10.1080/13803395.2023.2261633
  • Colligan, R. C. (1985). History and development of the MMPI. Psychiatric Annals, 15(9), 524–533. https://doi.org/10.3928/0048-5713-19850901-05
  • Denney, R. L., Thinda, S., Finn, P. M., Fazio, R. L., Chen, M. J., & Walsh, M. R. (2024). Development of a measure for assessing malingered incompetency in criminal proceedings: Denney competency related test (D-CRT). Journal of Clinical and Experimental Neuropsychology, 1–17. https://doi.org/10.1080/13803395.2024.2314731
  • Finley, J. A., Cerny, B. M., Brooks, J. M., Obolsky, M. A., Haneda, A., Ovsiew, G. P., Ulrich, D. M., Resch, Z. J., & Soble, J. R. (2023). Cross-validating the clinical assessment of attention deficit-adult symptom validity scales for assessment of attention deficit/hyperactivity disorder in adults. Journal of Clinical and Experimental Neuropsychology, 1–13. Advance online publication. https://doi.org/10.1080/13803395.2023.2283940
  • Gaasedelen, O. J., Whiteside, D. M., Altmaier, E., Welch, C., & Basso, M. R. (2019). The construction and the initial validation of the cognitive bias scale for the personality assessment inventory. The Clinical Neuropsychologist, 33(8), 1467–1484. https://doi.org/10.1080/13854046.2019.1612947
  • Gervais, R. O., Ben-Porath, Y. S., Wygant, D. B., & Green, P. (2007). Development and validation of a response bias scale (RBS) for the MMPI-2. Assessment, 14(2), 196–208. https://doi.org/10.1177/1073191106295861
  • Guilmette, T. J., Sweet, J. J., Hebben, N., Koltai, D., Mahone, M. E., Spiegler, B. J., Stucky, K., Westerveld, M., & Conference Participants. (2020). American Academy of Clinical Neuropsychology Consensus Conference Statement on Uniform Labeling of Performance Test Scores. The Clinical Neuropsychologist, 34(3), 437–453. https://doi.org/10.1080/13854046.2020.1722244
  • Hall, J. T., & Ben-Porath, Y. S. (2022). The MMPI-2-RF validity scales: An overview of research and applications. In R. W. Schroeder & P. K. Martin (Eds.), Validity assessment in clinical neuropsychological practice: Evaluating and managing noncredible performance (pp. 150–178). The Guilford Press.
  • Hathaway, S. R., & McKinley, J. C. (1942). Manual for the Minnesota multiphasic personality inventory. University of Minnesota Press.
  • Ingram, P. B., Armistead-Jehle, P., Childers, L. G., & Herring, T. T. (2024). Cross validation of the response bias scale and the response bias scale-19 in active-duty personnel: Use on the MMPI-2-RF and MMPI-3. Journal of Clinical and Experimental Neuropsychology, 1–11. https://doi.org/10.1080/13803395.2024.2330727
  • Ingram, P. B., Keen, M. A., Greene, T. E., Morris, C., & Armistead-Jehle, P. J. (2024). Development and initial validation of the scale of scales (SOS) overreporting scores for the MMPI family of instruments. Journal of Clinical and Experimental Neuropsychology, 1–16. https://doi.org/10.1080/13803395.2024.2320453
  • Larrabee, G. J. (2012). Performance validity and symptom validity in neuropsychological assessment. Journal of the International Neuropsychological Society, 18(4), 625–630. https://doi.org/10.1017/S1355617712000240
  • Leib, S. I., Schieszler-Ockrassa, C., White, D. J., Gallagher, V. T., Carter, D. A., Basurto, K. S., Ovsiew, G. P., Resch, Z. J., Jennette, K. J., & Soble, J. R. (2022). Concordance between the Minnesota multiphasic personality inventory-2-restructured form (MMPI-2-RF) and clinical assessment of attention deficit-adult (CAT-A) over-reporting validity scales for detecting invalid ADHD symptom reporting. Applied Neuropsychology: Adult, 29(6), 1522–1529. https://doi.org/10.1080/23279095.2021.1894150
  • Millon, T., Grossman, S., & Millon, C. (2019). Millon clinical multiaxial inventory-IV (MCMI-IV) manual. Pearson, Inc.
  • Morey, L. C. (2007). The personality assessment inventory professional manual (2nd ed.). Psychological Assessment Resources.
  • Schroeder, R. W., & Bieu, R. K. (2024). Exploration of PCL-5 symptom validity indices for detection of exaggerated and feigned PTSD. Journal of Clinical and Experimental Neuropsychology, 1–10. Advance online publication. https://doi.org/10.1080/13803395.2024.2314728
  • Sherman, E. M. S., Slick, D. J., & Iverson, G. L. (2020). Multidimensional malingering criteria for neuropsychological assessment: A 20-year update of the malingered neuropsychological dysfunction criteria. Archives of Clinical Neuropsychology, 35(6), 735–764. https://doi.org/10.1093/arclin/acaa019
  • Shura, R. D., Denning, J. H., Miskey, H. M., & Rowland, J. A. (2017). Symptom and performance validity with veterans assessed for attention-deficit/hyperactivity disorder (ADHD). Psychological Assessment, 29(12), 1458–1465. https://doi.org/10.1037/pas0000436
  • Shura, R. D., Sapp, A., Ingram, P. B., & Brearly, T. W. (2024). Evaluation of telehealth administration of MMPI symptom validity scales. Journal of Clinical and Experimental Neuropsychology, 1–9. Advance online publication. https://doi.org/10.1080/13803395.2024.2314734
  • Suhr, J. A., & Berry, D. T. (2017). The importance of assessing for validity of symptom report and performance in attention deficit/hyperactivity disorder (ADHD): Introduction to the special section on noncredible presentation in ADHD. Psychological Assessment, 29(12), 1427. https://doi.org/10.1037/pas0000535
  • Sweet, J. J., Heilbronner, R. L., Morgan, J. E., Larrabee, G. J., Rohling, M. L., Boone, K. B., Kirkwood, M. W., Schroeder, R. W., & Suhr, J. A. (2021). American Academy of Clinical Neuropsychology (AACN) 2021 consensus statement on validity assessment: Update of the 2009 AACN consensus conference statement on neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 35(6), 1053–1106. https://doi.org/10.1080/13854046.2021.1896036
  • Watson, J. D., Pierce, B. S., Tyler, C. M., Donovan, E. K., Merced, K., Mallon, M., Autler, A., & Perrin, P. B. (2023). Barriers and facilitators to psychologists’ telepsychology uptake during the beginning of the COVID-19 pandemic. International Journal of Environmental Research and Public Health, 20(8), 5467. https://doi.org/10.3390/ijerph20085467
  • White, D. J., Ovsiew, G. P., Rhoads, T., Resch, Z. J., Lee, M., Oh, A. J., & Soble, J. R. (2020). The divergent roles of symptom and performance validity in the assessment of ADHD. Journal of Attention Disorders, 26(1), 101–108. https://doi.org/10.1177/1087054720964575
