APPLIED STUDIES

Measuring and diagnosing unilateral neglect: a standardized statistical procedure

Pages 1248-1267 | Received 12 Jan 2017, Accepted 26 Jun 2017, Published online: 25 Jul 2017
 

Abstract

Objective: Unilateral neglect is usually investigated by administering stimuli (targets) in different positions, with targets being responded to by the patient (Hit) or omitted. In spite of this homogeneity of data type, neglect indices and diagnostic criteria vary considerably, causing inconsistencies in both clinical and experimental settings. We aimed to derive a standard analysis that would apply to all tasks sharing this data form. Methods: A-priori theoretical reasoning demonstrated that the mean position of Hits in space (MPH) is an optimal index for correctly diagnosing and quantifying neglect. Crucially, MPH eliminates the confounding effects of deficits that are different from neglect (non-lateral) but which decrease Hit rate. We ran a Monte Carlo study to assess MPH’s (so far overlooked) statistical behavior as a function of the numbers of targets and Hits. Results: While average MPH was indeed insensitive to non-lateral deficits, MPH’s variance (like that of all other neglect indices) increased dramatically with increasing non-lateral deficits. This instability would lead to alarmingly high false-positive rates (FPRs) when applying a classical diagnostic procedure that compares one patient with a control sample. We solved the problem by developing an equation that takes into account MPH instability and provides correct cut-offs and close-to-nominal FPRs, even without control subjects. We developed a computerized program which, given the raw data, yields the MPH, a z-score and a p-value. Conclusions: We provided a standard method that allows clinical and experimental neuropsychologists to diagnose and measure neglect in a consistent way across the vast majority of tasks.
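
The MPH index at the heart of the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors’ program; the position scale (−.5, .5) follows the paper’s conventions, and the function name is ours.

```python
# Illustrative sketch of the MPH (mean position of Hits) index.
# Assumed conventions: target positions are scaled to the display
# interval (-.5, .5); each target is scored Hit = 1 or Miss = 0.
def mean_position_of_hits(positions, hits):
    """Mean spatial position of the detected (Hit) targets only."""
    hit_positions = [x for x, h in zip(positions, hits) if h == 1]
    return sum(hit_positions) / len(hit_positions)

# A patient who omits left-sided targets gets an MPH shifted rightward:
positions = [-0.4, -0.2, 0.0, 0.2, 0.4]
hits = [0, 0, 1, 1, 1]  # the two leftmost targets are missed
print(round(mean_position_of_hits(positions, hits), 3))  # 0.2: rightward shift
```

Because only detected targets enter the mean, a uniform (non-lateral) drop in Hit rate leaves the expected MPH unchanged, which is the property the abstract exploits.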

Acknowledgments

We thank Robin Hilsabeck and the two anonymous referees for their thoughtful comments, and Justin Harris for his kind help in revising the English. This paper is dedicated to the memory of Emanuela Radice and Alessandro Latocca.

Notes

1. We often use ‘detection’ instead of ‘successful processing’ in the paper, although ‘detection’ would be inappropriate in some instances where the 1/0 score dichotomy applies (e.g. when the task is recall from memory).

2. R-L compares the two halves of the display. One can divide space into more than two sectors and compute a regression of % Hits against the sectors’ spatial positions; in the limit one may take the Hit/Miss score on each single target, plot it against position, and fit the cloud of points with something like a logistic curve. Butler, Eskes, and Vandorpe (2004) did so and quantified neglect in terms of the slope of the curve. However, exactly the same logical limits affecting the simple R–L difference (Table ) also affect the regression slope. See later for further discussion of this parameter.
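
The logistic-slope idea in this note can be sketched as follows. This is a minimal illustration under assumed conventions (positions in (−.5, .5), Hit = 1/Miss = 0), fitted with plain gradient ascent for self-containment; it is not Butler, Eskes, and Vandorpe’s implementation.

```python
import math

# Fit per-target Hit/Miss scores against target position with a logistic
# curve; the fitted slope serves as a neglect index (steeper positive
# slope = Hit rate rising more sharply rightwards, i.e. left neglect).
def logistic_slope(positions, hits, lr=0.5, steps=20000):
    b0, b1 = 0.0, 0.0  # intercept, slope
    n = len(positions)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(positions, hits):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (y - p)          # log-likelihood gradient, intercept
            g1 += (y - p) * x      # log-likelihood gradient, slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b1

# Hypothetical left-neglect-like data: Hits become more likely rightwards.
xs = [-0.4, -0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3, 0.4]
ys = [0, 0, 0, 1, 0, 1, 1, 1, 1]
print(logistic_slope(xs, ys) > 0)  # positive slope: more Hits on the right
```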

3. By this expression we mean that either sheer intuition or more formalized models of neglect indicate a specific order of neglect severity. We wished not to commit to any general or specific theory of neglect in the present paper in order to keep the validity of our analysis as general as possible.

4. Location varies in the domain ±∞, not in the (−.5, .5) space of the display; ceiling is the top Hit rate in the ±∞ domain, or the upper asymptote of the curve; slope is relative to ceiling: it expresses how ‘fast’ the Hit rate drops towards zero starting from ceiling (not from 1).

5. Ceiling is also constant. Recall that ceiling is the upper asymptote: at +∞, it reaches 1 for all three patients.

6. This is a reasonable assumption, since double dissociations between neglect and many other deficits potentially affecting target detection have been shown. Regarding terminology, if one is studying the vertical dimension, the term ‘non-altitudinal’ would be more correct than ‘non-lateral’. However, we use the term ‘non-lateral’ throughout the paper for clarity’s sake.

7. Suppose that on a cancellation task with distractors, in position x the level of neglect (the ‘lateral’ deficit) is such that 50% of stimuli are missed. Suppose also that because of mild visual agnosia (the ‘non-lateral’ deficit) the patient correctly processes the shape of a target with 70% probability. The final Hit rate in position x is .5 × .7 = .35 – one has both to spatially select (p = .5) and to correctly process the shape (p = .7) of a target in order to cancel it. By changing position, the probability of successful spatial parsing changes (it increases rightwards and decreases leftwards) while the probability of successful shape processing is constant across space, p = .7.
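
The multiplicative logic of this note can be made concrete with a toy calculation; the spatial-selection probabilities at the non-central positions are hypothetical values of ours, only the central .5 × .7 = .35 case comes from the note.

```python
# Toy illustration of the note's arithmetic: a Hit requires both spatial
# selection (lateral, position-dependent) and shape processing
# (non-lateral, constant across space).
def hit_rate(p_spatial, p_shape=0.7):
    """Hit probability = P(spatial selection) * P(shape processing)."""
    return p_spatial * p_shape

# Spatial-selection probability grows rightwards (hypothetical values);
# shape-processing probability stays at .7 everywhere.
for x, p_spatial in [(-0.4, 0.2), (0.0, 0.5), (0.4, 0.9)]:
    print(x, round(hit_rate(p_spatial), 3))
# At x = 0.0 this reproduces the note's .5 x .7 = .35.
```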

8. If one accepted the idea that G-I profiles can reflect different degrees of neglect, the differences between those profiles would become entirely ambiguous: they might reflect differences in neglect severity, differences in non-lateral deficits, or differences in both. Interpretation of performances D-F would be affected by the same ambiguity.

9. The comparisons we carried out were between patients differing in only one parameter of the curve at a time. We did not discuss the (virtually infinite) cases where differences in more than one parameter combine: their complexity would have made any decision as to the correct order of neglect severity impossible without an explicit neglect theory, and we wished to avoid committing to any such theory, in order to keep our statistical model as general as possible. Hence we were satisfied that the chosen index, MPH, could correctly differentiate the equivalence classes obtained from variation of single parameters of the curve.

10. The instability of R–L difference and Accuracy is maximal when (average) Hit rate = .5 and minimal when it approaches 0 or 1. L/TOT’s instability, like MPH’s, is minimal for (average) Hit rate close to 1 and maximal when it approaches 0.
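
The instability this note describes is easy to reproduce in a small Monte Carlo sketch (ours, not the paper’s simulation): even with no lateral bias at all, the spread of MPH across simulated patients grows as overall Hit rate falls, simply because fewer Hits enter the mean.

```python
import random

# Monte Carlo sketch of MPH instability: targets are detected
# independently with the same probability everywhere (no lateral bias),
# and we measure how much MPH varies across simulated sessions.
random.seed(0)
positions = [-0.5 + (i + 0.5) / 40 for i in range(40)]  # 40 evenly spaced targets

def mph_sd(p_hit, n_sim=5000):
    """Standard deviation of MPH over n_sim simulated sessions."""
    vals = []
    for _ in range(n_sim):
        hits = [x for x in positions if random.random() < p_hit]
        if hits:  # skip the (rare) sessions with zero Hits
            vals.append(sum(hits) / len(hits))
    m = sum(vals) / len(vals)
    return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5

print(mph_sd(0.9) < mph_sd(0.3))  # MPH is far less stable at low Hit rate
```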

11. Rorden and Karnath (2010) obtained cut-offs for CoC from 53 control patients without neglect. The sample’s standard deviation was .0313; hence a normal range with bidirectional FPR = 2% is .146 wide. However, the CoC scale is (−1, 1), so in terms of the MPH scale (−.5, .5) the normal range covers 7.3% of the display width.
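
The note’s arithmetic can be checked directly; `NormalDist` from Python’s standard library supplies the two-tailed 2% cut-off.

```python
from statistics import NormalDist

# Checking the note's numbers for Rorden and Karnath's CoC cut-off.
sd = 0.0313                          # SD of the 53-subject control sample
z = NormalDist().inv_cdf(1 - 0.01)   # two-tailed FPR = 2% -> z ~ 2.326
range_coc = 2 * z * sd               # width of the normal range on the CoC scale
range_mph = range_coc / 2            # CoC spans (-1, 1), twice MPH's (-.5, .5)
print(round(range_coc, 3))           # 0.146
print(round(range_mph * 100, 1))     # 7.3 (% of the display width)
```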

12. We have a dramatic, empirical example of a false positive. One of the 199 neurologically intact subjects who performed Diller and Weinberg’s (1977) cancellation task (see the section ‘Normal subjects only vary for Hit rate …’) had an exceptionally low Hit rate, .722. Taking as a ‘standardization sample’ the 197 subjects whose detection rate was at least .9, his z-score for the MPH was +5.389, which represents strong evidence of left neglect. Our method, which takes into account the large increase in MPH variance due to the decrease in overall detection rate, led to a z-score of .703 – perfectly within normal variability.

13. Call the test(s) used for filtering the patients of the control group ‘F’ (for ‘Filtering’); call the test to be standardized ‘ST’. If F and ST are the same test, the argument is circular: one would be using ST to select the control subjects that are needed to standardize ST. If F is merely similar (correlated) to ST, the argument is still circular: one would be using variation in F, which is shared by ST, to select the subjects needed to standardize ST. Finally, if F is completely different from (uncorrelated with) ST, the argument becomes an infinite regress: F was used in the process of standardizing ST, but in order for F to be valid, F itself should be standardized, so another, different test F1 should be used to standardize F, and so on.

14. The only truly safe way of excluding subclinical neglect is to exclude all patients with brain damage, that is, to use only normal subjects.

15. The equation also works perfectly well for T = 256 – the upper limit of targets in the Worksheet for automatic computation.

16. Data were collected from the electronic archives of many different experimental and clinical studies carried out by one of us (AT) across several years (1994–2013). Demographics could be traced back for 76% of the subjects.

17. There was a clear outlier, with z = 3.318. This subject missed 11/108 targets, 10 on the left and 1 on the right display half. The absolute deviation of his MPH was minor (+.03 or 3% of the display width), however even excluding him on suspicion of some undetected minor brain damage (a legitimate move: H1 specifies σ > 1 with Gaussian shape, and not that there are outliers) group mean = −.076 (t(63) = .611, p = .543, BF = 9210) and standard deviation = .999 (χ2(63) = 62.9, one-tailed p = .48, BF = 225.12).

18. In more general terms: if the processes that, when lesioned, induce neglect fail to parse a given target, a Hit is impossible.

19. If eccentricity effects are markedly left-right asymmetrical, this would cause inflation of both FPR and FNR.
