
Method of assessment and symptom reporting in veterans with mild traumatic brain injury

Pages 1-11 | Received 29 Jan 2014, Accepted 05 Sep 2014, Published online: 09 Jan 2015

Abstract

Objectives: We hypothesized that in a sample of veterans (1) frequency and consistency of post-concussive symptom endorsement would differ across assessment methods (detailed physician interview, brief screening interview, or self-report questionnaire checklist) and (2) participants would endorse more symptoms on the self-report checklist than in the screening interview or the physician interview. Methods: Veterans and current military service members were recruited via newspaper advertisement for a research project assessing history of traumatic brain injury and the presence and severity of post-concussive symptoms. Participants underwent evaluation that included a brief screening interview (the Rehabilitation Institute of Chicago Military Traumatic Brain Injury Screening Instrument), a detailed physician interview, and completion of a self-report questionnaire (the Rivermead Post-concussion Questionnaire). Results: Symptom reporting significantly differed across assessment methods for headaches [Q(2) = 65.45, p < .001], dizziness [Q(2) = 52.55, p < .001], and nausea [Q(2) = 58.58, p < .001]. Symptoms were most likely to be reported in the brief screening interview, followed by the self-report questionnaire and then the physician interview. Consistency of symptom reporting also differed: reporting of dizziness was more discordant across assessment methods than reporting of nausea or headaches. Discussion: Our findings supported our first hypothesis but provided only partial support for our second hypothesis. That is, the data confirm that post-concussion symptom reporting differs by data-gathering technique and type of symptom. Yet, contrary to our expectations, participants endorsed more symptoms during the brief screening interview than on the self-report questionnaire. These findings may have implications for optimizing assessment of complaints after concussion, especially within a veteran population.

Introduction

A collection of symptoms that persists after mild traumatic brain injury (mTBI) has been described as “post-concussion syndrome” and has been measured with a variety of assessment methods, including self-report questionnaires, brief screening interviews, and detailed physician interviews. In a study of a self-report questionnaire, Eyres, Carey, Gilworth, and Neumann (Citation2005) found that three post-concussive symptoms (headaches, nausea, and dizziness) frequently occur together and described that cluster as “predominantly physical”, in contrast to other symptoms (e.g. irritability, frustration, and depression) that they described as “psychological”. Although the syndrome references a specific physical injury in its name, and although Eyres et al. proposed that headaches, nausea, and dizziness in particular have a specifically physical character, research has shown that these symptoms are nonspecific; they are frequently reported by individuals with other disorders and by people who are neurologically intact (Gunstad & Suhr, Citation2002; Iverson & Lange, Citation2003; Iverson, Zasler, & Lange, Citation2007; Wang, Chan, & Deng, Citation2006). Laborey et al. (Citation2014) reported that these three symptoms were endorsed by 13–37% of a non-concussed medical sample, mostly with orthopedic injuries, who underwent a brief screening interview three months after treatment at a hospital. Iverson and Lange (Citation2003) reported that the same three symptoms were endorsed by 38–52% of healthy individuals who completed a self-report form.

Despite the lack of clarity about the etiology of these symptoms, psychologists and other behavioral health clinicians routinely assess such symptoms following concussion by questionnaire or interview. For example, in the Post-Deployment Health Assessment, returning US military personnel answer questions regarding concussion and post-concussive symptoms first via an electronic questionnaire and then in a face-to-face interview (Department of Veterans Affairs & Department of Defense, Citation2009; Terrio, Nelson, Betthauser, Harwood, & Brenner, Citation2011). Others have presented recommendations for how to use self-report questionnaires as a means of screening for persistent post-concussive symptoms (Chan, Citation2005; Eyres et al., Citation2005; Lannsjo, Geijerstam, Johansson, Bring, & Borg, Citation2009).

The validity of self-report has been examined at some length for both questionnaires (Sullivan & Garden, Citation2011) and clinical interviews (Vanderploeg, Groer, & Belanger, Citation2012). In the debate about the validity of retrospective self-report, some have asserted that the absence of corroborative medical data obtained at the time of injury leaves the accuracy of that self-report in doubt, while others have argued that careful use of self-report data is both appropriate and necessary for diagnosis of mTBI (Corrigan & Bogner, Citation2007a). While this debate continued, little attention was given to the effect of data collection methods on symptom reporting. Recently, however, researchers have begun to examine how inconsistency in symptom reporting may be due to differences in assessment methods, a question with important implications for clinical practice.

Factors impacting symptom reporting

The method by which information is gathered can affect the frequency of symptom endorsement and the nature and severity of the symptoms reported. For example, a group of athletes who completed a written questionnaire checklist reported more concussion symptoms than another group of athletes who underwent a face-to-face interview in which the same checklist was read aloud by the examiner (Krol, Mrazik, Naidu, Brooks, & Iverson, Citation2011). This difference was significant for several symptoms, including dizziness and headache. Additionally, those given written questionnaires reported higher severity for individual symptoms and higher overall symptom severity. A significantly higher percentage of athletes endorsed dizziness and headache on paper than when interviewed by a physician. In this sample, the number of symptoms endorsed was also higher among those who were questioned by a woman than among those who were questioned by a man (Krol et al., Citation2011).

When participants who had a history of mTBI underwent a free-response symptom review (a series of open-ended questions regarding symptoms) and then subsequently completed a symptom checklist, they reported more symptoms on the checklist and often listed them as moderate or severe, although they had not endorsed them at all on the free-response symptom review (Iverson, Brooks, Ashton, & Lange, Citation2010). Villemure, Nolin, and Le Sage (Citation2011) similarly found in an mTBI population in the acute post-injury phase that symptom checklist methods of interview led to a higher number of symptoms reported than free response.

Nolin, Villemure, and Heroux (Citation2006) had a similar finding at follow-up one year post mTBI. Participants reported more symptoms via checklist than they did via free response. Some of those symptoms were reported only on the checklist and were never included in the free-response condition. This finding led the authors to conclude that participants did not associate those symptoms with their TBI until the questionnaire suggested the association (Nolin et al., Citation2006). Similarly, in a comparison of self-report questionnaires versus simulated interview with open-ended questions versus structured interview, Edmed and Sullivan reported that less symptom elicitation was observed in assessment methods that involved less prompting (e.g. interview with open-ended questions) (Edmed & Sullivan, Citation2014).

Study aims and hypothesis

As noted above, post-deployment screening for concussion is standard among military personnel, but a review of the literature did not show any study of the influence of data collection method on symptom reporting in this population. The present analysis explores factors that affect endorsement of post-concussive symptoms in a sample of veterans.

First, we hypothesized that the frequency of symptom reporting for headaches, dizziness, and nausea would differ across assessment methods (detailed physician interview versus brief screening interview versus self-report questionnaire checklist). In other words, we predicted that participants would not consistently endorse the same symptoms on different measures, because previous studies have shown similar inconsistencies (Corrigan & Bogner, Citation2007a; Iverson et al., Citation2010; Krol et al., Citation2011; Vanderploeg et al., Citation2012).

Second, we hypothesized that participants would endorse more symptoms on the self-report checklist questionnaire than during the brief screening interview or during the detailed physician interview. Previous studies have shown that compared to other reporting methods, checklists are associated with higher endorsement of symptoms (Corrigan & Bogner, Citation2007a; Iverson et al., Citation2010; Krol et al., Citation2011).

However, previous studies have not identified which symptoms are most likely to be affected by administration method. As a purely exploratory analysis, we therefore examined whether any of the symptoms were more or less likely to be consistently endorsed across administrations.

Methods

Sample

Veterans and current military service members were recruited through newspaper advertisement. Participants were seen at a freestanding rehabilitation hospital. After providing informed consent, all participants provided proof of military service through a current military ID, discharge papers, or a current identification card from the Department of Veterans Affairs. Volunteers were not allowed to participate if they were already known by research staff to have a history of TBI. We consented 367 participants. For the present analysis, participants were excluded if they (a) had incomplete data sets or (b) did not have a history of TBI as determined by a physician (see below). Two physicians regularly staffed the research clinic, and the majority of participants were seen by one of these physicians as part of the protocol (see below). To increase internal validity by eliminating potential interviewer anomalies, 20 participants were omitted from the present analysis because a substitute physician conducted their interviews. Ultimately, 191 participants were determined to have sustained a TBI. The majority of the sample (90.2%) was determined by the physician to have sustained an mTBI, based on the duration of loss of consciousness (LOC) and post-traumatic amnesia (PTA). The remainder of the sample (9.6%) was determined to have sustained a moderate TBI; their data were included in the present analysis. The demographics for these 191 participants are given in Table 1.

Table 1. Demographic profile of study participants, n = 191.

Instruments: detailed physician interview

All participants underwent a 30-minute physician assessment that included a neurological examination and a detailed structured interview focusing on TBI. The physicians who conducted the interviews are physiatrists specializing in the treatment of TBI. In the structured interview, participants listed all the TBIs they sustained, both during and outside of military service. Based on the participant's retrospective account, physicians diagnosed TBI in those cases in which participants indicated they experienced head trauma and subsequent PTA, or LOC, or reported neuroimaging abnormalities or contemporaneous symptoms of cerebral dysfunction. When determining whether LOC occurred, physicians asked, “Was there a period of time you were knocked-out after your injury?” When evaluating for PTA, participants were asked, “Was there a period of time just before or after the injury that you were awake but have no memory of what happened?” If the participant responded in the affirmative, the physician further questioned for details of the loss of memory. If the participant gave a positive report of LOC or PTA, the physician prompted for further information, including duration. As there were no medical records available for review, the physician used their clinical judgment to assess the validity of patient self-reports. A physiatrist specializing in TBI created the structured interview, and a part of the interview includes a list of post-concussive symptoms selected by the physiatrist. For those diagnosed with TBI, the physician interview also included administration of the checklist of post-concussive symptoms, including headaches, dizziness, and nausea as well as 14 others. Physicians asked “Have you experienced any of the following since the event?” and read the checklist aloud, noting any symptoms the participant endorsed.
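For illustration only, the diagnostic rule described above can be expressed as a simple boolean check; the function and parameter names below are hypothetical and are not part of the study instrument.

def meets_tbi_criteria(head_trauma, loc, pta, imaging_abnormality, acute_symptoms):
    # Hypothetical sketch of the decision rule: TBI is diagnosed when head trauma
    # is reported together with at least one corroborating feature (LOC, PTA,
    # neuroimaging abnormality, or contemporaneous symptoms of cerebral dysfunction).
    return head_trauma and (loc or pta or imaging_abnormality or acute_symptoms)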

Instruments: brief screening interview

One of four research assistants administered the Rehabilitation Institute of Chicago Military Traumatic Brain Injury Screening Instrument to each participant (Zollman, Starr, Kondiles, Cyborski, & Larson, Citation2014). Research personnel administered the screening by reading items verbatim and recording responses by participants. The first items establish if a head injury has occurred. If a participant reported having experienced a head injury, research assistants asked “Have you experienced any of the following since the event?” and read aloud a checklist of post-concussive symptoms, including headaches, dizziness, nausea, and 10 others, noting any symptoms the participant endorsed. If no head injury was reported, the checklist was not administered. Only research assistants were aware of participant responses to the screening. The clinicians who administered other measures were blinded to screening results to avoid biasing their evaluations. Time to administer the screening averaged 5–10 minutes per participant.

Instruments: self-report questionnaire

The Rivermead Post-concussion Questionnaire (RPQ) (King, Crawford, Wenden, Moss, & Wade, Citation1995) is a written self-report measure listing 16 post-concussive complaints, which the participant rates in severity. Since instructions specify that it should be administered only to individuals with a history of concussion, it was given only to those participants who reported in the structured physician interview that they sustained a concussion. The written instructions to the respondent specify, “Compared with before the accident, do you now suffer from … .” The first three symptoms are headaches, dizziness, and nausea/vomiting. Participants rate their symptoms on a scale of 0 (not experienced) to 4 (severe problem). Two factor scores have been validated: the RPQ-3 (the first three items) and the RPQ-13 (the remaining items) (Eyres et al., Citation2005). For our analyses, we used the RPQ-3 (headaches, dizziness, and nausea) and a scoring procedure recommended by Lannsjo et al. (Citation2009), wherein ratings of 0 and 1 are negative endorsements of the symptoms and ratings of 2 or above are counted as positive endorsements.
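As an illustration of this scoring rule, the short sketch below dichotomizes hypothetical RPQ-3 ratings following the Lannsjo et al. (Citation2009) procedure; the column names and ratings are invented for the example and are not study data.

import pandas as pd

# Hypothetical RPQ-3 ratings (0 = not experienced ... 4 = severe problem)
rpq = pd.DataFrame({
    "headaches": [0, 2, 4, 1],
    "dizziness": [3, 0, 2, 1],
    "nausea":    [1, 0, 2, 0],
})

# Ratings of 0-1 count as negative endorsements; ratings of 2-4 as positive
rpq3_endorsed = rpq >= 2
print(rpq3_endorsed.mean())  # proportion of hypothetical participants endorsing each symptom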

Symptom comparison

Ultimately, participants reporting a head injury were asked about their symptoms three times: in a brief interview by a research assistant, in a detailed interview by a physician, and via the Rivermead self-report questionnaire. Headache, nausea, and dizziness were assessed in all three methods of administration, and for the present analysis these three symptoms were the only ones compared.

Statistical analysis

Data analyses were conducted using SPSS 17.0 for Windows. For data analysis, we used the variables headaches (yes or no), nausea (yes or no), and dizziness (yes or no), as reported in the three assessment methods in which each participant took part: the detailed physician interview, the brief screening interview, and the self-report questionnaire checklist (RPQ). Thus, each participant had three reports of headaches, nausea, and dizziness that we compared. Participants’ responses were additionally marked as either concordant (a participant's endorsement of the symptom was the same across all three assessment methods) or discordant (a participant's endorsement of a symptom in one assessment method differed from the other two). For example, a participant who reported headaches in the brief screening interview but did not endorse headaches on the self-report questionnaire or in the detailed physician interview was marked as discordant, whereas a participant who reported headaches in all three was marked as concordant.
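A minimal sketch of this concordance coding, assuming one row per participant with a yes/no (1/0) report of a symptom from each method; the column names and responses are hypothetical.

import pandas as pd

# Hypothetical headache reports: 1 = endorsed, 0 = not endorsed
reports = pd.DataFrame({
    "screen_headache":    [1, 1, 0, 1],  # brief screening interview
    "rpq_headache":       [1, 0, 0, 1],  # self-report questionnaire (RPQ)
    "physician_headache": [1, 0, 0, 0],  # detailed physician interview
})

# Concordant when all three methods agree; discordant when one method differs
concordant = reports.nunique(axis=1) == 1
print(concordant.value_counts())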

To test the first hypothesis, that the frequency of symptom report for headaches, dizziness, and nausea would significantly differ depending on the assessment method, within-subject Cochran's Q tests were used; Cochran's Q detects whether three or more related sets of dichotomous data differ significantly (in other words, it is the nonparametric analogue of a repeated-measures ANOVA for dichotomous outcomes). For an exploratory analysis testing whether consistency in symptom reporting was greater for headache and dizziness than for nausea, cases were marked as concordant if a symptom was reported identically across all three assessment methods or as discordant if the report differed in one of the three methods, and McNemar tests were employed to test for significance. The McNemar test is essentially a paired version of the chi-squared test and is used to detect significant differences between two related sets of nominal data.
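The analyses were run in SPSS; as an illustration, an equivalent within-subject Cochran's Q test on a participants-by-methods matrix of binary endorsements might look like the sketch below (the data are invented, not study data).

import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q

# Rows = participants, columns = assessment methods (screen, RPQ, physician);
# 1 = symptom endorsed, 0 = not endorsed (hypothetical values)
headache = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
    [1, 1, 1],
])

result = cochrans_q(headache, return_object=True)
print(result.statistic, result.pvalue)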

To test the second hypothesis, that symptom reporting would be highest on the self-report questionnaire compared with the physician interview and the screening interview, post hoc McNemar tests with Bonferroni correction for multiple comparisons (corrected significance level = .02) were used, analogous to the way a post hoc analysis follows a parametric test such as an ANOVA. Within-subject nonparametric tests (Cochran's Q and McNemar tests) were employed rather than a parametric test such as a repeated-measures ANOVA because the dependent data were binary and nominal (yes/no symptom endorsements). Both tests are more conservative than traditional parametric tests such as the ANOVA (i.e. there is a smaller chance of making a type 1 error, that is, rejecting the null hypothesis when it is not correct to do so).
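Similarly, a sketch of the post hoc pairwise McNemar tests with a Bonferroni-corrected alpha (.05 / 3 ≈ .017, reported above as .02) is given below; it uses invented endorsement data and Python's statsmodels rather than the SPSS procedure actually used.

from itertools import combinations
import numpy as np
import pandas as pd
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical endorsements: rows = participants, columns = (screen, RPQ, physician)
headache = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0], [1, 1, 1]])
methods = ["screen", "rpq", "physician"]
alpha_corrected = 0.05 / 3  # Bonferroni correction for three pairwise comparisons

for i, j in combinations(range(3), 2):
    table = pd.crosstab(headache[:, i], headache[:, j])  # paired 2x2 table
    res = mcnemar(table, exact=False, correction=True)
    print(f"{methods[i]} vs {methods[j]}: chi2 = {res.statistic:.2f}, "
          f"significant at corrected alpha: {res.pvalue < alpha_corrected}")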

Results

Participant characteristics

This study employed a within-subjects design in which all participants provided data for each of the three assessment methods. Ninety percent of the sample was male (see Table 1). Mean age was 51 years, and a majority of participants (79%) were classified as black or African-American. The majority of the study sample had graduated from high school (87%), and 50% had attended at least 1 year of college, with 15% graduating with a 4-year degree (mean years of education = 14).

Hypothesis 1: symptom reporting consistency

We first hypothesized that the frequency of symptom reporting for headaches, dizziness, and nausea would differ across assessment methods. Cochran's Q analyses supported this hypothesis: symptom reporting significantly differed across administration method for all three symptoms assessed in the RPQ-3: headaches [Q(2) = 65.45, p < .001], dizziness [Q(2) = 52.55, p < .001], and nausea [Q(2) = 58.58, p < .001] (see Figure 1).

Figure 1. Participant symptom endorsement (%) by assessment method (n = 191). Note: There was a significant difference in symptom endorsement by assessment method at p < .01 for all symptoms.


To explore whether specific symptoms were endorsed more inconsistently across assessment methods, cases were marked as either concordant (a participant's endorsement was the same across all three assessment methods) or discordant (a participant's endorsement in one assessment method differed from that in the other two). The frequency of discordance (the percentage of individuals whose endorsement of a particular symptom was not the same across all three administration methods) was highest for dizziness (see Table 2). Between-subject comparisons for each symptom revealed that education, race, and gender did not differ between participants who gave discordant responses and those who gave concordant responses. However, participants with discordant responses were significantly older than those with concordant responses for headache report [t(189) = −2.30, p = .02], dizziness report [t(189) = −2.28, p = .02], and nausea report [t(189) = −2.42, p = .02].

Table 2. Percentage of participants with discordant symptom report (n = 191).

Hypothesis 2: symptom reporting consistency by method

Our second hypothesis was that participants would endorse more symptoms on the self-report questionnaire than in the brief screening interview and in the detailed physician interview. Consistent with this hypothesis, we found that endorsement of each of the three symptoms differed across assessment methods. Our hypothesis about which assessment method would yield the highest endorsement rates for each symptom was partially supported. As hypothesized, post hoc comparisons showed that participants were more likely to report symptoms in the brief screening interview than in the detailed physician interview and more likely to report symptoms on the self-report questionnaire than in the physician interview (see Figure 1). Contrary to our hypothesis, participants endorsed symptoms more often in the brief screening interview than on the self-report questionnaire. To summarize, endorsement was most frequent in the brief screening interview, next most frequent on the self-report questionnaire, and least frequent in the detailed physician interview. This pattern was observed for all three symptoms included in our analyses (headaches, dizziness, and nausea), each of which is detailed below.

Symptom reporting of headaches differed across assessment methods: 72% of participants endorsed headaches in the brief screening interview versus 58% on the self-report questionnaire versus 40% in the detailed physician interview. McNemar tests found that the frequency of endorsement of headaches was significantly different for each of the three pairwise comparisons of assessment methods: brief screening interview versus self-report questionnaire (χ2 = 16.45, p ≤ .001), self-report questionnaire versus detailed physician interview (χ2 = 18.15, p ≤ .001), and brief screening interview versus detailed physician interview (χ2 = 51.19, p ≤ .001).

Symptom reporting for dizziness also differed across assessment methods: 58% in the brief screening interview versus 49% in the self-report questionnaire versus 28% in the detailed physician interview. Frequency of endorsement of dizziness was significantly different for each of the three pairwise comparisons of assessment methods: brief screening interview versus self-report questionnaire (χ2 = 3.75, p = .05), self-report questionnaire versus detailed physician interview (χ2 = 24.53, p ≤ .001), and brief screening interview versus detailed physician interview (χ2 = 44.49, p ≤ .001).

Symptom reporting for nausea also differed across assessment methods: 38% in the brief screening interview versus 23% in the self-report questionnaire versus 10% in the detailed physician interview. Frequency of endorsement of nausea was significantly different for each of the three pairwise comparisons of assessment methods: brief screening interview versus self-report questionnaire (χ2 = 15.85, p ≤ .001), self-report questionnaire versus detailed physician interview (χ2 = 18.58, p ≤ .001), and brief screening interview versus detailed physician interview (χ2 = 40.36, p ≤ .001).

Discussion

Although it has been argued that post-concussive symptoms are nonspecific and that health psychologists and other clinicians who work with concussion patients cannot conclusively attribute these symptoms to concussion, large numbers of military personnel are undergoing post-deployment concussion screening (Fear et al., Citation2009). It has been suggested that these nonspecific symptoms require treatment even while their precise etiology remains uncertain (Brenner, Vanderploeg, & Terrio, Citation2009). That suggestion was based on the experience of clinicians, who have found that effective treatment planning is sometimes possible even when the cause of a symptom is unclear. Clinical experience has also shown, however, that it is much more difficult to plan treatment when the presence of a symptom is uncertain (i.e. the patient endorses a symptom in one context but denies it in another). When different assessment methods yield inconsistent results regarding behavioral health issues, clinicians lack clear direction about how to proceed.

Until now there has been little discussion of how discordant assessment findings impact care of veterans, although this issue has been studied in other populations. For example, there are reports that method of administration of instruments assessing post-concussive symptoms influences findings in athletes; higher symptom numbers were reported in self-report versus a physician interview (Krol et al., Citation2011). The goal of the present study was to determine if assessment method also influences post-concussive symptom report in veterans. Given the widespread use of new assessment instruments (e.g. post-deployment concussion screenings) in that population, such a study is urgently needed.

Our examination of the first hypothesis showed that the frequency of post-concussive symptom endorsement varied across assessment methods. This is consistent with previous reports (Iverson et al., Citation2010; Krol et al., Citation2011; Nolin et al., Citation2006; Villemure et al., Citation2011). Further study is needed to assess how the frequency of symptom endorsement across assessment methods changes over time in a veteran population. Villemure et al. (Citation2011) found that regardless of the form of interview (free response or checklist), fewer symptoms were reported at three months after the injury than at one week after the injury. This is consistent with the typical resolution of post-concussion symptoms over time and suggests that any over- or underreporting associated with assessment method follows the normal evolution of the disorder.

Discordance of symptom reporting differed between symptoms. Pairwise comparisons showed that endorsement of nausea was less discordant than endorsement of dizziness. It is possible that nausea is less likely to be misattributed, misremembered, or misunderstood, and thus more consistently reported.

Examination of our second hypothesis showed that, unexpectedly, participants endorsed fewer post-concussive symptoms on a self-report questionnaire checklist than in a brief screening interview. Despite this, as expected they reported more symptoms on the self-report questionnaire than in a physician interview. This is partially consistent with the report by Krol et al. (Citation2011), which found that participants endorse more symptoms on a symptom checklist administered in a written format than on the same symptom checklist administered in an interview format.

The literature suggests that, in general, participants give more accurate information in more anonymous formats when the questions are more sensitive. It is possible that in face-to-face interviews, symptom endorsement is affected by social desirability: it has long been established that the desire to be viewed favorably affects the information a research participant conveys to an interviewer (Holtgraves, Citation2004). Findings specific to post-concussive symptoms and social desirability have not been reported in the current literature, and Krol et al. (Citation2011) did not establish which method (written versus spoken administration) produced the most accurate answers.

Iverson et al. (Citation2010) suggested that people tend to over-report on questionnaires for a variety of potential reasons: nocebo effect, expectation, nonspecific symptom endorsement, or exaggeration for gain. Iverson's research in this area followed up on a report by Mittenberg, DiGiulio, Perrin, and Bass (Citation1992) that symptom development may be due in part to the expectation that certain symptoms occur after head injury. Expectation effects may have influenced the present study, since participants were informed that they were taking part in a study that screened for TBI and since they responded to post-concussive symptom checklists several times. However, in the present study clinicians saw participants in a round-robin format: participants were always seen first by a research assistant, but the order of their subsequent assessments varied. In other words, all participants first completed the research assistant's brief interview, but then some had the physician's interview next, while others had the self-report questionnaire next. To some degree, this format should have prevented the order of administration from biasing results. Further, if any priming or biasing occurred through the research assistant's brief screening interview, we would expect the two subsequent assessments (physician interview and self-report questionnaire) to show higher numbers of reported symptoms. However, the opposite occurred; the highest symptom reports occurred with the first assessment, the brief screening interview.

There are several other limitations of the current study. First, our participants did not provide any documentation to confirm their history of concussion. It is possible that self-report includes inaccuracies, although it has been suggested that as long as self-reports are scrutinized by an experienced clinician, they may be adequate for diagnosing past injuries (Corrigan & Bogner, Citation2007a, Citation2007b; Hoge, Goldberg, & Castro, Citation2009). Future studies should obtain documentation of injury and use a validated structured interview. Second, some of the participants in the present study reported on injuries that occurred several decades in the past (an average of 24 years), while others recalled injuries sustained only a few months prior to evaluation. While it is unlikely that the remoteness of injury would contribute to discrepancies in symptom endorsement across different assessment administration methods, future studies may address this concern by focusing on a more homogeneous sample (e.g. veterans of the Iraq and Afghanistan conflicts). Finally, because this is a veteran population, there is also the possibility of post-traumatic stress disorder (PTSD) as a concurrent diagnosis. Further studies should evaluate whether symptoms of PTSD are also affected by method of administration.

The present findings add to past studies of influences on clinical assessment of post-concussive symptoms. Our findings show that assessment administration method affects symptom endorsement and that discordant reporting is observed more often in some post-concussive symptoms (e.g. dizziness) than others (e.g. nausea). The primary clinical application is that clinicians should not treat all assessment methods as equivalent or interchangeable. This supports the Veterans Health Administration policy that service members who screen positive for concussion must undergo a detailed interview/exam by a clinical team at a VA polytrauma center or by other clinicians with specialized training, such as a neurologist, physiatrist, or neuropsychiatrist (Department of Veterans Affairs, VHA, Citation2010). The present findings suggest that a similar practice should be followed in civilian settings: positive screenings, which are primarily based on symptom endorsement, require confirmation by a detailed examination and should not be treated as a confirmed diagnosis.

Funding

This study was funded by grants from the Robert M. McCormick Tribune Foundation, the Julius N. Frankel Foundation, the Joseph G. Nicholas Foundation, and the Barker Welfare Foundation.

References

  • Brenner, L. A., Vanderploeg, R. D., & Terrio, H. (2009). Assessment and diagnosis of mild traumatic brain injury, posttraumatic stress disorder, and other polytrauma conditions: Burden of adversity hypothesis. Rehabilitation Psychology, 54(3), 239–246. doi: 10.1037/a0016908
  • Chan, R. C. (2005). How severe should symptoms be before someone is said to be suffering from post-concussion syndrome? An exploratory study with self-reported checklist using Rasch analysis. Brain Injury, 19(13), 1117–1124. doi: 10.1080/026990500150088
  • Corrigan, J. D., & Bogner, J. (2007a). Screening and identification of TBI. Journal of Head Trauma Rehabilitation, 22(6), 315–317. doi: 10.1097/01.HTR.0000300226.67748.3e
  • Corrigan, J. D., & Bogner, J. (2007b). Initial reliability and validity of the Ohio State University TBI identification method. Journal of Head Trauma Rehabilitation, 22(6), 318–329. doi: 10.1097/01.HTR.0000300227.67748.77
  • Department of Veterans Affairs, & Department of Defense. (2009). VA/DoD clinical practice guideline for management of concussion/mild traumatic brain injury (mTBI). Washington, DC: Author.
  • Department of Veterans Affairs, VHA. (2010). Screening and evaluation of possible traumatic brain injury in Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF) Veterans. In VHA Directive 2010–012. Washington, DC: Author.
  • Edmed, S. L., & Sullivan, K. A. (2014). Method of symptom assessment influences cognitive, affective and somatic post-concussion-like symptom base rates. Brain Injury, 15, 1–6.
  • Eyres, S., Carey, A., Gilworth, G., & Neumann, V. (2005). Construct validity and reliability of the Rivermead post-concussion symptoms questionnaire. Clinical Rehabilitation, 19(8), 878–887. doi: 10.1191/0269215505cr905oa
  • Fear, N. T., Jones, E., Groom, M., Greenberg, N., Hull, L., Hodgetts, T. J., & Wessely, S. (2009). Symptoms of post-concussional syndrome are non-specifically related to mild traumatic brain injury in UK armed forces personnel on return from deployment in Iraq: An analysis of self-reported data. Psychological Medicine, 39(8), 1379–1387. doi: 10.1017/S0033291708004595
  • Gunstad, J., & Suhr, J. A. (2002). Perception of illness: Nonspecificity of postconcussion syndrome symptom expectation. Journal of the International Neuropsychological Society, 8(1), 37–47. doi: 10.1017/S1355617702811043
  • Hoge, C. W., Goldberg, H. M., & Castro, C. A. (2009). Care of war veterans with mild traumatic brain injury – flawed perspectives. New England Journal of Medicine, 360(16), 1588–1591. doi: 10.1056/NEJMp0810606
  • Holtgraves, T. (2004). Social desirability and self-reports: Testing models of socially desirable responding. Personality and Social Psychology Bulletin, 30(2), 161–172. doi: 10.1177/0146167203259930
  • Iverson, G. L., Brooks, B. L., Ashton, V. L., & Lange, R. T. (2010). Interview versus questionnaire symptom reporting in people with the postconcussion syndrome. Journal of Head Trauma Rehabilitation, 25(1), 23–30. doi: 10.1097/HTR.0b013e3181b4b6ab
  • Iverson, G. L., & Lange, R. T. (2003). Examination of “postconcussion-like” symptoms in a healthy sample. Applied Neuropsychology, 10(3), 137–144. doi: 10.1207/S15324826AN1003_02
  • Iverson, G. L., Zasler, N. D., & Lange, R. T. (2007). Post-concussive disorder. In N. D. Zasler, D. I. Katz, & R. D. Zafonte (Eds.), Brain injury medicine: Principles and practice (pp. 373–405). New York, NY: Demos.
  • King, N. S., Crawford, S., Wenden, F. J., Moss, N. E. G., & Wade, D. T. (1995). The Rivermead post concussion symptoms questionnaire: A measure of symptoms commonly experienced after head injury and its reliability. Journal of Neurology, 242(9), 587–592. doi: 10.1007/BF00868811
  • Krol, A. L., Mrazik, M., Naidu, D., Brooks, B. L., & Iverson, G. L. (2011). Assessment of symptoms in a concussion management programme: Method influences outcome. Brain Injury, 25(13–14), 1300–1305. doi: 10.3109/02699052.2011.624571
  • Laborey, M., Masson, F., Ribéreau-Gayon, R., Zongo, D., Salmi, L. R., & Lagarde, E. (2014). Specificity of postconcussion symptoms at 3 months after mild traumatic brain injury: Results from a comparative cohort study. Journal of Head Trauma Rehabilitation, 29(1), E28–E36. doi: 10.1097/HTR.0b013e318280f896
  • Lannsjo, M., Geijerstam, J.-L., Johansson, U., Bring, J., & Borg, J. (2009). Prevalence and structure of symptoms at 3 months after mild traumatic brain injury in a national cohort. Brain Injury, 23(3), 213–219. doi: 10.1080/02699050902748356
  • Mittenberg, W., DiGiulio, D. V., Perrin, S., & Bass, A. E. (1992). Symptoms following mild head injury: Expectation as aetiology. Journal of Neurology, Neurosurgery and Psychiatry, 55(3), 200–204. doi: 10.1136/jnnp.55.3.200
  • Nolin, P., Villemure, R., & Heroux, L. (2006). Determining long-term symptoms following mild traumatic brain injury: Method of interview affects self-report. Brain Injury, 20(11), 1147–1154. doi: 10.1080/02699050601049247
  • Sullivan, K., & Garden, N. (2011). A comparison of the psychometric properties of 4 postconcussion syndrome measures in a nonclinical sample. Journal of Head Trauma Rehabilitation, 26(2), 170–176. doi: 10.1097/HTR.0b013e3181e47f95
  • Terrio, H. P., Nelson, L. A., Betthauser, L. M., Harwood, J. E., & Brenner, L. A. (2011). Postdeployment traumatic brain injury screening questions: Sensitivity, specificity, and predictive values in returning soldiers. Rehabilitation Psychology, 56(1), 26–31. doi: 10.1037/a0022685
  • Vanderploeg, R. D., Groer, S., & Belanger, H. G. (2012). Initial developmental process of a VA semistructured clinical interview for TBI identification. Journal of Rehabilitation Research and Development, 49(4), 545–556. doi: 10.1682/JRRD.2011.04.0069
  • Villemure, R., Nolin, P., & Le Sage, N. (2011). Self-reported symptoms during post-mild traumatic brain injury in acute phase: Influence of interviewing method. Brain Injury, 25(1), 53–64. doi: 10.3109/02699052.2010.531881
  • Wang, Y., Chan, R. C., & Deng, Y. (2006). Examination of postconcussion-like symptoms in healthy university students: Relationships to subjective and objective neuropsychological function performance. Archives of Clinical Neuropsychology, 21(4), 339–347. doi: 10.1016/j.acn.2006.03.006
  • Zollman, F. S., Starr, C., Kondiles, B., Cyborski, C., & Larson, E. B. (2014). The rehabilitation institute of Chicago military traumatic brain injury screening instrument: Determination of sensitivity, specificity, and predictive value. Journal of Head Trauma Rehabilitation, 29(1), 99–107. doi: 10.1097/HTR.0b013e318294dd37