Review Article

Identifying spatial neglect – an updated systematic review of the psychometric properties of assessment tools in adults post-stroke

Received 24 Oct 2023, Accepted 15 Apr 2024, Published online: 10 May 2024

ABSTRACT

Spatial neglect commonly occurs after a stroke, with diverse impacts depending on its type and severity. There are almost 300 tools for assessing neglect, yet there is a lack of knowledge on the psychometric properties of these tools. The objective of this systematic review, registered on PROSPERO (CRD42021271779), was to determine the quality of the evidence for tools used to assess spatial neglect, categorized by neglect subtype. The following databases were searched on 3rd May 2022 from database inception: Ovid Emcare, Embase, Ovid MEDLINE, APA PsycINFO, Web of Science (SCI-EXPANDED; SSCI; A&HCI; ESCI) and Scopus. All primary peer-reviewed studies (>5 participants) of adults post-stroke reporting any psychometric property of 33 commonly used neglect assessment tools were included. The COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) risk of bias tool was used to assess the methodological quality of the studies and summarize the psychometric properties of each tool. 164 articles were included, with a total of 12,463 people with stroke. The general quality of the evidence was poor, and no one tool had high-quality evidence of both validity and reliability. Eleven tools show some promise, as they meet the minimum criteria for good measurement properties for both validity and reliability.

Introduction

Spatial neglect, experienced by approximately 50% of all stroke survivors (Esposito et al., Citation2021; Pedroli et al., Citation2015), is characterized by difficulties with reporting, responding or orientating to stimuli from the side opposite the brain lesion, where this deficit is not attributable to a primary motor or sensory impairment (Heilman et al., Citation2003). Subtypes of neglect have been identified to capture the range of neglect manifestations observed in different domains, such as visual, tactile, auditory, motor and representational (mental imagery) neglect, and in the spatial regions of personal (on the body), peripersonal (near space) and extrapersonal (far space) neglect; see Williams et al. (Citation2021) for a summary. Spatial neglect can also be categorized as egocentric (body-centered) and/or allocentric (object-centered), according to the frame of reference.

There are almost 300 tools for the screening or diagnosis of neglect (Williams et al., Citation2021); however, there is a lack of knowledge of the psychometric properties of many of these tools. Many assessment tools underdiagnose neglect because they lack the sensitivity to detect mild or moderate neglect (Bonato et al., Citation2013; Pflugshaupt et al., Citation2004; Puig-Pijoan et al., Citation2018). A recent scoping review provided an overview of all the neglect tools and identified many that do not specify the neglect subtype assessed (Williams et al., Citation2021). That review analyzed whether the scoring differentiated behaviours for specific neglect subtypes, but did not review the psychometric properties. One systematic review has summarized the psychometric properties of 19 spatial neglect assessment tools (Menon & Korner-Bitensky, Citation2004). However, that review is outdated, as many more tools and psychometric studies have been published in the subsequent 20 years. Additionally, it only included egocentric and perceptual components of neglect and did not include tools for representational, motor, allocentric, tactile, or auditory neglect (Menon & Korner-Bitensky, Citation2004). An international survey of tools used in current practice has recently been completed (Checketts et al., Citation2021). However, to our knowledge, no recent review has systematically compared relevant psychometrics, such as validity and reliability, across the varied tools used in clinical practice today.

Assessment tools with good psychometric properties are crucial for accurate and reliable measurement of spatial neglect. The psychometric properties of tools need to be rigorously tested to show that they measure what they intend to measure (content and construct validity), that they do so accurately (criterion validity, sensitivity and specificity) and consistently (reliability), and that they can detect changes over time (responsiveness) (Mokkink, Prinsen, et al., Citation2018). Using tools that do not have adequate validity and/or reliability introduces randomness and imprecision. This can have detrimental consequences, such as improper diagnosis, undetected neglect, or inappropriate interventions, thus negatively impacting rehabilitation outcomes (Kerkhoff & Schenk, Citation2012).

Tools with high test-retest reliability are essential to track change in neglect signs over time (Aldridge et al., Citation2017). However, measuring the reliability of neglect assessment tools is challenging, as reliability inherently refers to the ability of a tool to measure a stable construct over time (Mokkink et al., Citation2020). Spatial neglect is not stable over time: the severity of neglect signs can vary from day to day and even over the course of a therapy session (Anderson et al., Citation2000; Woods et al., Citation2017), and often improves over time. Therefore, studies exploring the reliability of tools to consistently identify neglect need to be carefully planned and implemented, taking the varying nature of neglect into account. A low test-retest correlation may reflect the varying nature of neglect itself, rather than variability in how it is measured.

This systematic review provides an update on the state of the evidence by critically appraising the psychometric properties of commonly used assessment tools for identifying spatial neglect in clinical practice. The review applied the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) methodology for systematic reviews (Mokkink et al., Citation2020; Mokkink, de Vet, et al., Citation2018). A recent international survey on the screening and assessment of neglect was used to identify the tools most commonly used in clinical practice (Checketts et al., Citation2021) for inclusion in this review.

The questions that this systematic review answered are:

  1. What are the psychometric properties (validity, reliability, responsiveness) of spatial neglect tools that are commonly used in clinical practice, categorized by neglect subtype?

  2. What is the feasibility of the spatial neglect tools? (Feasibility defined as the ease of administration and process, such as the time it takes to complete, resources and training required)

  3. What is the utility of the spatial neglect tools? (Does the tool provide useful and relevant information for clinicians, such as whether the tool specifies the neglect severity and what types of neglect can be identified?)

Methods

Protocol and registration

This systematic review followed the PRISMA 2020 statement: An updated guideline for reporting systematic reviews (Page et al., Citation2021), and the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) methodology for systematic reviews (Mokkink, de Vet, et al., Citation2018; Mokkink et al., Citation2020). The review was registered on PROSPERO, the international prospective register of systematic reviews (registration number: CRD42021271779), and is available at: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021271779. The review was registered prior to the final selection of tools for inclusion.

Inclusion of tools in this review

Initially, all spatial neglect screening and assessment tools were to be included. However, after a recent scoping review identified almost 300 tools for the assessment of spatial neglect (Williams et al., Citation2021), the criteria for selecting tools were refined. The updated criteria required tools to be: (1) used in clinical practice (identified from an international survey of current clinical practice (Checketts et al., Citation2021), or suggested for inclusion by a multi-disciplinary panel of experts, the International group for Spatial Attention and Neglect Disorders (I-SAND) (www.isand.org)); (2) designed to detect neglect of any type; (3) standardized (clear, documented protocols that are replicable); (4) reported with some form of validity and reliability (determined through a preliminary search of the literature, which included screening previous systematic reviews, reviewing the Stroke Engine website, and a Google Scholar search of each tool name until two full pages of irrelevant hits); (5) transdisciplinary (tools were excluded if they were specifically designed to be used by only one discipline; for example, the “Arnadottir OT-ADL Neurobehavioural Evaluation” (A-ONE) is designed specifically for occupational therapists, and its administration and scoring assume prior knowledge of occupational therapy theory); and (6) available in English. Refer to Online Resource 1 for the full list of all 33 included assessment tools, the screening flowchart outlining the inclusion of tools, and the list of exclusion reasons.

Search strategy

A search strategy was developed based on the scoping review of spatial neglect subtypes, definitions and assessment tools (Williams et al., Citation2021), which was modified from a Cochrane review of spatial neglect (Bowen et al., Citation2013) by removing search strings aimed at finding RCTs. The names and abbreviations of the included tools were added to the search strategy, along with a validated search filter for finding studies on measurement properties (Terwee et al., Citation2009). The search strategy was then reviewed by an academic librarian to maximize the likelihood that all relevant studies would be found. The final search was completed on the following databases on 3rd May 2022, from database inception: Ovid Emcare, Embase, Ovid MEDLINE, APA PsycINFO, Web of Science (SCI-EXPANDED; SSCI; A&HCI; ESCI) and Scopus. Refer to Online Resource 2 for the full search strategy.

Eligibility criteria

This systematic review included primary studies reporting any psychometric property (as defined by COSMIN) of the 33 included spatial neglect tools in adults post-stroke (with more than 85% of the sample having a diagnosis of stroke), with no restriction on setting or recovery stage post-stroke. Studies were excluded if they were not peer reviewed (e.g., book chapters); were not written in English; had five participants or fewer; or if the sample included neglect resulting from a traumatic brain injury or tumour. Studies including children, or only healthy controls, were also excluded. Two reviewers independently screened the titles and abstracts in Covidence for eligibility. The full texts of potentially eligible studies were retrieved and assessed for full eligibility independently by two researchers from the authorship team. Any discrepancies over the eligibility of a particular study were resolved by a third researcher or a team discussion.

Data extraction

A data extraction form was developed based on the COSMIN data extraction templates for systematic reviews (Terwee and Mokkink). Data were extracted from the full texts of potentially eligible studies by the first author. Forty-six studies were randomly selected to have the data extraction checked by an additional researcher: 30 studies were independently extracted, and 16 studies were double-checked to ensure completeness and accuracy of the extracted data. Agreement between the data extracted for these 46 studies was 97.9%. The following data items were extracted from each study: (1) characteristics of the included patient sample; (2) characteristics of each tool, such as a description of each test/subtest, administration procedures, scoring system, utility and feasibility; and (3) measurement properties for each tool, defined as per COSMIN (Mokkink et al., Citation2020; Mokkink, de Vet, et al., Citation2018). Refer to Online Resource 3 for the full list of data items extracted.

Evaluation of content validity as per the COSMIN guidelines was not considered appropriate, as this section of the guidelines is very specific to patient-reported measures (Mokkink, Prinsen, et al., Citation2018), whereas the tools included in this review were all clinician-reported or performance-based measures. Additionally, it was not considered necessary to evaluate content validity, as the determination of the constructs (neglect subtypes) assessed by each tool was completed in a recent scoping review (Williams et al., Citation2021). The descriptions of each neglect tool and the documented protocols were systematically analyzed by two authors of the scoping review to determine (1) the main neglect subtypes that are specifically identified by each tool, and (2) other subtypes to which deficient performance may be attributed, but which the scoring does not allow to be differentiated (Williams et al., Citation2021). Refer to the scoping review for a full description of the methodology (Williams et al., Citation2021). A summary of the neglect subtypes assessed by each tool (from the scoping review) was added to the utility section of this systematic review, as this provides context to clinicians and researchers on the information that each assessment provides.

Risk of bias assessment

The methodological quality and the quality of the results were evaluated for each study as per the COSMIN guidelines (Mokkink, de Vet, et al., Citation2018; Mokkink et al., Citation2020). The risk of bias checklist included an evaluation of methodological design, statistical methods, and bias relating to other flaws relevant to each psychometric property. The checklist was completed several times for an article if it included data for multiple psychometric properties, such as internal consistency, structural validity, reliability, and construct validity. Each standard in the COSMIN risk of bias checklist was rated on a four-point scale as either “very good”, “adequate”, “doubtful” or “inadequate”. For a detailed explanation of the individual standards and guidelines, refer to pages 23–37 of the COSMIN risk of bias checklist (Mokkink et al., Citation2018), and to Mokkink et al. (Citation2020) for the updated standards for assessing reliability and measurement error. The overall methodological quality of each psychometric property reported in each study was then rated using the “worst score counts” method, as per the COSMIN guidelines, for the simple reason that poor methodological aspects cannot be overcome by good aspects (Mokkink, de Vet, et al., Citation2018). The risk of bias evaluation was completed by the first author for all studies, with 101 studies also rated by another member of the research team, achieving 89.23% agreement. Any discrepancies were resolved by discussion until a consensus was reached.
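The “worst score counts” rule is straightforwardly mechanical, so it can be pictured as taking the minimum over the rated standards. The minimal Python sketch below is our illustration only (the function and list names are not from COSMIN):

```python
# Ordered from worst to best, as in the COSMIN four-point scale.
RATINGS = ["inadequate", "doubtful", "adequate", "very good"]

def worst_score_counts(standard_ratings):
    """Overall methodological quality for one psychometric property of one
    study: the lowest rating across all its rated standards, because poor
    methodological aspects cannot be overcome by good ones."""
    return min(standard_ratings, key=RATINGS.index)

# A study rated "very good" on most standards but "doubtful" on one
# receives an overall rating of "doubtful":
print(worst_score_counts(["very good", "adequate", "doubtful", "very good"]))
```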

Appraisal of measurement properties

Refer to Table 1 for the definitions of the psychometric properties, according to COSMIN, that were appraised for all the included tools (Mokkink et al., Citation2018). The psychometric properties reported in the individual studies were compared against the COSMIN criteria for good measurement properties and rated as either sufficient, insufficient, or indeterminate (refer to the COSMIN manual for detailed criteria) (Mokkink, Prinsen, et al., Citation2018, pp. 28–29). The first author appraised the psychometric properties in all the studies, with 101 studies also appraised by another member of the research team, achieving 91.7% agreement. Any discrepancies were resolved by discussion until a consensus was reached.

Table 1. Definitions of the psychometric properties that were appraised for all the included tools, as per COSMIN guidelines.

As there is no gold standard tool for identifying spatial neglect, sensitivity analyses could not be compared or synthesized across the studies. COSMIN recommends that, in this case, sensitivity analyses be evaluated as evidence of construct validity rather than criterion validity where appropriate (Mokkink, de Vet, et al., Citation2018). Criterion validity was only considered when the sensitivity/specificity of a modified or simplified version of a tool was compared to the original full version.

Structural validity is only relevant for tools with more than one subtest that are based on a reflective model, which assumes all items and subtests of the tool are manifestations of one underlying construct and are thus expected to be correlated (Mokkink, de Vet, et al., Citation2018). For this review, all tools with subtests were assumed to be based on a reflective model (if not otherwise stated), so structural validity was examined. The review team also assumed that neglect tools should correlate with one another, as evidence of construct validity. However, as neglect is not considered a unitary syndrome, given the many documented double dissociations between different neglect subtypes (Demeyere & Gillebert, Citation2019; Ortigue et al., Citation2006), the correlations between tools assessing different neglect subtypes were not expected to necessarily be ≥0.5, as COSMIN recommends for similar constructs. Therefore, the criterion for sufficient construct validity was any significant correlation >0.3 with another neglect tool, as per the COSMIN recommendations for “related but dissimilar constructs” that should have correlations between 0.3 and 0.5 (de Vet et al., Citation2011; Mokkink, Prinsen, et al., Citation2018).
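To make this adapted criterion concrete, the sketch below rates a single tool-to-tool comparison against the >0.3 threshold. It is a minimal illustration only, assuming ordinal neglect scores from the same patient sample; the function and variable names are ours, not from the review.

```python
from scipy import stats

def rate_construct_validity(tool_a_scores, tool_b_scores, alpha=0.05):
    """Rate one construct validity comparison using this review's adapted
    COSMIN criterion: any significant correlation > 0.3 with another
    neglect tool is rated "sufficient"."""
    r, p = stats.spearmanr(tool_a_scores, tool_b_scores)  # ordinal scores
    if p >= alpha:
        return "insufficient"  # no significant association
    return "sufficient" if abs(r) > 0.3 else "insufficient"

# Hypothetical omission counts from two cancellation tasks:
print(rate_construct_validity([2, 5, 9, 14, 20, 3, 7, 11],
                              [1, 4, 10, 12, 18, 2, 8, 9]))
```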

Data synthesis

The results of available studies for each psychometric property were pooled or summarized for each tool to determine the overall ratings as per COSMIN (Mokkink, de Vet, et al., Citation2018). If the results of all the studies were consistent, then the results were pooled to give an overall rating of either “sufficient” (+) or “insufficient” (-). If the results were inconsistent (for example, both sufficient and insufficient results were found), then three strategies were used: (1) the conclusions were based on the majority of consistent results (if >75% were consistent); (2) explanations for the inconsistencies were explored and, if found, were summarized per subgroup; or (3) an overall rating of “inconsistent” was given. Refer to Figure 1 for a flow chart of how the ratings were assigned for each psychometric property of a tool; a minimal code sketch of the same decision flow follows the figure.

Figure 1. Decision flowchart for rating the overall psychometric property for each assessment tool, according to COSMIN.

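As a minimal sketch (not code from the review), the pooling logic for one psychometric property of one tool might be expressed as follows. Step 2, finding subgroup explanations for inconsistencies, requires qualitative judgement and is deliberately left out.

```python
def pool_overall_rating(study_ratings):
    """Pool per-study ratings ("+" sufficient, "-" insufficient) for one
    psychometric property of one tool, following the decision flow in
    Figure 1 (steps 1 and 3 only)."""
    if not study_ratings:
        return "no evidence"
    share_sufficient = study_ratings.count("+") / len(study_ratings)
    if share_sufficient == 1.0:
        return "+"             # all studies consistent: sufficient
    if share_sufficient == 0.0:
        return "-"             # all studies consistent: insufficient
    if share_sufficient > 0.75:
        return "+"             # >75% majority: rate as sufficient
    if share_sufficient < 0.25:
        return "-"             # >75% majority: rate as insufficient
    return "inconsistent"      # no majority and no explanation found

print(pool_overall_rating(["+", "+", "-", "+", "+"]))  # -> "+" (80% sufficient)
```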

The overall quality of evidence for each measurement property of a tool was then collated across the studies and summarized according to the modified GRADE approach (Mokkink, de Vet, et al., Citation2018) as one of the following:

  • High – confident that the measurement property lies close to the estimate (pooled or summarized result).

  • Moderate – moderate confidence in the measurement property estimate. It is likely to be close to the estimate (pooled or summarized result), but there is a possibility that it is substantially different.

  • Low – limited confidence in the measurement property estimate. The true measurement property may be substantially different from the estimate.

  • Very Low – very little confidence in the measurement property estimate. The true measurement property is likely to be substantially different from the estimate.

As uni-dimensionality is a prerequisite for internal consistency analyses, the quality of the evidence for structural validity was assessed first and taken as a starting point for interpreting the quality of the evidence for internal consistency (Mokkink, Prinsen, et al., Citation2018). For example, if the structural validity for a tool was rated “moderate”, then this was the starting point for rating internal consistency of that tool (rather than starting from “high”) and the quality was downgraded accordingly.
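As a rough illustration of this starting-point rule, downgrading can be pictured as moving down an ordered scale. This is an assumption-laden sketch, not the review's procedure, which applied COSMIN's qualitative criteria rather than a fixed numeric rule:

```python
GRADE_LEVELS = ["very low", "low", "moderate", "high"]

def downgrade(start, steps=1):
    """Lower a modified-GRADE rating by `steps` levels, bottoming out
    at "very low"."""
    return GRADE_LEVELS[max(GRADE_LEVELS.index(start) - steps, 0)]

# Internal consistency starts from the structural validity grade,
# not from "high", and is downgraded from there:
structural_validity = "moderate"
print(downgrade(structural_validity, steps=1))  # -> "low"
```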

The risk of bias ratings and ratings for each measurement property were charted using the COSMIN online tools (see https://www.cosmin.nl/tools/guideline-conducting-systematic-review-outcome-measures/) (Terwee and Mokkink).

Results

This systematic review included 165 studies after the systematic search and screening process; refer to Figure 2 for the search and screening flowchart. A total of 12,974 people with stroke (mean per study: 79; SD: 102) and 381 healthy controls were recruited to the included studies, from 29 different countries. The mean age of participants ranged from 55.1 to 77 years. Table 2 presents a description of the 33 assessment tools, alternative versions and scoring methods.

Figure 2. Search and screening flowchart.


Table 2. Description of the assessment tools, alternative versions and methods of scoring.

365 psychometric properties were extracted from the 165 included studies, and the risk of bias of all 365 psychometric properties was evaluated according to the COSMIN guidelines (Mokkink, Prinsen, et al., Citation2018). The majority of the studies (n = 95) reported multiple psychometric properties (e.g., reliability, measurement error and known-groups validity), sometimes for multiple assessment tools. The most frequently reported type of psychometric property was hypothesis testing for construct validity, with 222 instances reported across the included studies.

A summary of the findings, including the overall ratings for each measurement property of each tool and the overall quality of the evidence as per the modified GRADE approach, is outlined in Table 3. No tool showed full evidence for all eight measurement properties. Twelve tools showed some evidence of sufficient reliability, according to COSMIN criteria. However, none of the 33 tools had high-quality evidence of sufficient reliability; only one tool had moderate evidence (Star Cancellation), whereas the others had either low or very low quality evidence.

Table 3. Summary of findings: describes the overall good measurement properties ratings and modified GRADE criteria for each psychometric property of the spatial neglect tools, derived from the pooled results from each study as per Online Resource 4.

The assessment tools with at least sufficient evidence of both validity and reliability are presented in Table 4, according to the neglect subtype(s) for each tool as per the scoping review categorisations (Williams et al., Citation2021). There were 23 included tools with some evidence of reliability; however, 20 of these tools (87%) had only “low” or “very low” quality evidence. For construct validity, 22 tools had some evidence of sufficient construct validity, but only nine of these had high quality evidence: (1) Behavioural Inattention Test – behavioural and conventional subtests; (2) BIT Star Cancellation Test; (3) BIT Line Bisection; (4) Schenkenberg Line Bisection; (5) Bells Test; (6) Baking Tray Task; (7) Catherine Bergego Scale (CBS); (8) CBS via the Kessler Foundation Neglect Assessment Process (KF-NAP); and (9) Mobility Assessment Course.

Most of the included studies used only the conventional subscale (a composite of 6 subtests) of the Behavioural Inattention Test (BIT) (Wilson et al., Citation1987). The conventional subscale alone demonstrated sufficient validity and reliability (see Table 3), although there was only “very low” quality evidence for reliability. The whole BIT battery, and the behavioural subscale (of 9 subtests) in isolation, have not demonstrated sufficient reliability in the included studies.

Regarding the utility of the assessment tools (whether the tool specifies the neglect severity and the subtypes of neglect identified): nine tools did not state the subtype of neglect, simply stating that the tool identified spatial neglect. Twenty-four of the 33 tools stated the subtype of neglect; however, these 24 tools did not indicate all relevant subtypes, according to our research team. For example, many cancellation tools state that they identify visual neglect, without mentioning peripersonal neglect or that visual neglect cannot be disentangled from motor neglect with these tools. Therefore, a column on the neglect subtypes for each tool was added in Table 2, taken from a previously published scoping review that systematically identified the neglect subtypes of all spatial neglect tools (Williams et al., Citation2021).

Fourteen studies reported on some aspect of feasibility (completion time, resources and training required), with six studies reporting the average completion time and six studies reporting the percentage of the sample able to complete the tool. Therefore, some of the timings in Table 2 are approximations, sourced from a combination of the clinical experience of the authors and the Stroke Engine website (strokengine.ca). One study reported carers’ comments on the acceptability of the tool to stroke survivors, and one study asked the stroke survivors themselves about their satisfaction and ability to understand the instructions.

No evidence was found on the measurement invariance or cross-cultural validity of the included tools. Additionally, no tool showed evidence of sufficient internal consistency, with 90% (9/10) of evaluated tools not demonstrating sufficient structural validity (a prerequisite for the evaluation of internal consistency) (see Table 3). A summary of the characteristics of each psychometric property extracted from each study, the summarized results and the risk of bias are presented in Online Resource 4.

Discussion

Accurate and reliable identification of spatial neglect can be the essential gateway to effective intervention. Therefore, this systematic review critically appraised the evidence for the psychometric properties of the 33 tools most often used for identifying spatial neglect in clinical practice and research. We reviewed 164 studies, aiming to identify the assessment tools with the best psychometric properties, categorized per subtype of neglect. However, not enough evidence was available to strongly recommend one assessment tool over another, as the general state of the evidence was poor, or absent for some subtypes such as motor, tactile, auditory or representational neglect. No one tool has sufficient high-quality evidence of both validity and reliability. There are many gaps in the summary tables where no evidence was available; for example, 21 tools have no reported evidence of measurement error. Even where evidence exists, the majority (66%, n = 63) of the psychometric properties were rated as either low or very low quality (refer to Table 3). This paper describes the best available evidence, identifies the gaps in the evidence, and makes recommendations for improvements through future research.

Tools with the best available evidence

Table 4 specifies the eleven assessment tools with the best available evidence; it is therefore recommended that clinicians and researchers refer to this table to guide their decisions on which tools to use. However, the quality of the evidence should be taken into account, as ratings of “low” or “very low” quality evidence indicate that the true psychometric property may be substantially different from what is reported (Mokkink, Prinsen, et al., Citation2018). Other considerations, such as the completion time, access to the tool and its cost, have been integrated into Table 2, as these may further guide clinicians and researchers on which tools to use.

Table 4. Tools with the best available evidence of both validity and reliability according to the COSMIN criteria for good measurement properties, categorized by neglect subtype.

Gaps in the evidence

Table 4 highlights the lack of tools with sufficient validity and reliability for motor, tactile, auditory and representational neglect. One of the main limitations of most tools is the inability to differentiate motor from visual neglect, as is the case with all pen-and-paper cancellation tasks (Husain et al., Citation2000; Làdavas, Citation1994; Plummer et al., Citation2003). These tasks cannot differentiate visual and motor neglect because they require viewing visual information across the page along with a motor response with the hand to cross out or draw items on the contralesional side (Husain et al., Citation2000). Previous studies have been unable to consistently classify visual and motor neglect across timepoints (Hamilton et al., Citation2008; Harvey & Olk, Citation2004). It is important to distinguish visual from motor neglect, as they may respond differently to interventions and may have different recovery profiles (Barrett & Burkholder, Citation2006). The use of accelerometers worn on the wrist has shown promise for objectively measuring motor neglect by recording differences in the spontaneous use of the two limbs despite normal strength (Toba et al., Citation2021); however, the reliability, validity and utility of this approach in clinical practice have not yet been explored.

It is surprising that none of the included tools had high-quality evidence of reliability. Reliability is essential to determine whether a change in neglect scores reflects real change in neglect signs or other factors, such as measurement error. The true reliability of the included tools may be substantially different to what is reported due to potential sources of bias, such as missing information or not stating whether the assessors were blinded (50%, n = 25), not including information on the sampling methods or inclusion and exclusion criteria (40%, n = 20), or not stating the time between measurements (4%, n = 2). A possible explanation for the missing information is that some studies included data on the psychometrics of a tool without this being the focus of the study.

The whole BIT battery, and the behavioural subscale (of 9 subtests) in isolation, have not demonstrated sufficient reliability in the included studies. There was very little evidence for the whole BIT battery, as the majority of studies used only the conventional subscale and very few also used the behavioural subscale. This is possibly because some elements of the behavioural scale are outdated, such as the use of an old dial telephone.

Reliability of neglect tools in this population can be difficult to evaluate due to the varying presentations of neglect over time (Anderson et al., Citation2000; Woods et al., Citation2017). For example, Hamilton et al.’s (Citation2008) study investigating subtype-defined patterns of performance over time found inconsistent patterns, with 61% of participants demonstrating a change of neglect subtype that was not consistent with improvement of spatial neglect. However, the reliability and validity of the tools used in Hamilton et al.’s (Citation2008) study were not evaluated; therefore, it cannot be determined whether this was due to a real change in neglect signs or to other factors, such as measurement error.

Line cancellation (Albert, Citation1973; Wilson et al., Citation1987) was the only tool with inconsistent evidence of construct validity. There was conflicting evidence on whether this tool correlated with other neglect measures, which could mean that it does not consistently measure the construct it purports to measure (Mokkink, Prinsen, et al., Citation2018). The inconsistency could not be explained by further inspection of the data, as there was no pattern in the types of tools that were or were not correlated. A potential reason may be that this is the only cancellation task that does not require selective attention, due to the absence of distractor items. This is consistent with previous research showing that task-related factors, such as the number of distractor stimuli and the type of response required, can adversely impact spatial exploration (Sarri et al., Citation2009). As a result, line cancellation is relatively insensitive to mild or moderate forms of spatial neglect in comparison to other cancellation tasks, such as the star cancellation test (Halligan et al., Citation1990) or the Bells Test (Gauthier et al., Citation1989). Therefore, line cancellation may not correlate with other neglect tools if the sample includes a significant proportion of mild or moderate cases of spatial neglect.

It is important for screening tools to have high sensitivity so that potential cases of spatial neglect are identified for further diagnostic assessment (Grech et al., Citation2017). It was not possible to directly compare the sensitivity of the included tools in this review, as there is no gold standard or reference standard to compare against. Therefore, many studies either used clinical judgement for the criterion scores (e.g., Grech et al., Citation2017; Rengachary et al., Citation2009), compared performance with healthy control participants (e.g., Nurmi et al., Citation2010), or compared sensitivity across the other tools in the study (e.g., Azouvi et al., Citation2002; Moore et al., Citation2018). Sensitivity analyses are constrained by the quality of the criterion scores; therefore, only validated tools should be used. Consensus on a gold standard is urgently needed; this may not be a single test, but a composite of cancellation and drawing tests and more naturalistic observation of daily tasks that tap into different presentations/subtypes of neglect. This is in line with surveys of current practice, which have identified cancellation and drawing tests as the most popular cognitive tests to identify spatial neglect in clinical practice, along with unstructured functional observation (Checketts et al., Citation2021; Evald et al., Citation2021; Menon-Nair et al., Citation2007).

Utility and feasibility

We have summarized the limited information on the utility and feasibility of all the tools in Table 2. The time required to complete each assessment tool was mentioned in only a few studies, with no information on the feasibility or interpretability of completing or scoring the tools for clinicians. Although not a focus of this study, it was noted that the included studies contained no information on whether the tools are acceptable and meaningful for stroke survivors themselves. Further research is needed to gain the perspectives of clinicians and stroke survivors on which factors are important for them.

Strengths and limitations of this review

This review guides clinicians and researchers on the psychometrics of commonly used spatial neglect tools, which have not been summarized since the last review 19 years ago (Menon & Korner-Bitensky, Citation2004). In 2004, only 62 standardized and non-standardized spatial neglect tools existed, compared to the almost 300 tools that exist today. This study is the first to report on the quality of the psychometric evidence for spatial neglect tools, as standards for evaluating the evidence, such as COSMIN (Mokkink et al., Citation2020), have only recently been developed. It was outside the scope of this review to search grey literature (such as test manuals) or to contact authors to obtain unreported data, due to potential reporting or publication biases. Only peer-reviewed studies were included; however, this was not considered a limitation, given the potential reporting bias of test manuals that lack peer review.

Why do we not have stronger psychometrics?

It is concerning that we do not have stronger psychometrics, considering the high proportion of stroke survivors with spatial neglect and the plethora of neglect research completed across multiple disciplines. There may be many reasons for this, such as the complex and varying nature of the condition itself, current assessment tools not meeting the needs of researchers and clinicians, new tools being developed each year, and the use of unvalidated batteries. Additionally, the COSMIN standards have only recently been developed, and many studies used statistical approaches that are not recognized as “acceptable”, such as Pearson correlations as evidence of reliability. There was also huge variability in sample sizes: almost a third of the included studies had very small samples (50 studies had under 30 participants), and another 23 studies were excluded for having five participants or fewer. The small sample sizes are possibly the result of research being completed by individual centres. Moving forward, it will be important to foster collaboration, enabling larger-scale studies and the development of a consensus on the most appropriate tools for screening and assessing spatial neglect.

Implications for research

High-quality research studies on the validity and reliability of spatial neglect assessment tools, guided by the COSMIN methodology, are needed. Direct comparisons of the validity and reliability of assessment tools are required so that the best tools for each neglect subtype can be identified. The tools with the best evidence to date are summarized in Table 4. Notably, there are no tools with sufficient evidence for identifying tactile, auditory, motor or representational neglect. Therefore, it is essential for further research to focus on confirming the best tools for these subtypes of neglect (see Table 4).

This study has categorized the assessment tools according to the subtypes of neglect that they assess. However, assessing all neglect subtypes in clinical practice may not be feasible or necessary. Further research should aim to reach consensus on which subtypes of neglect are important to assess consistently, and on the best screening and diagnostic methods for doing this across clinical practice settings. Once consensus is achieved, the recommended battery of tools should be validated. Many of the included papers used a combination of individual tools collated specifically for that study, rather than a validated battery. Additionally, the tools used to screen for spatial neglect are often also used for the comprehensive assessment of neglect (albeit often combined with several other tools), and those same tools are then used as outcome measures (Checketts et al., Citation2021; Grech et al., Citation2017; Williams et al., Citation2021). Ideally, the tools used for screening should be distinct from those used for diagnosis and from those used as outcome measures. However, we are a long way from achieving consensus on all of these aspects.

It is essential for reliability studies of neglect tools to ensure consistency in testing conditions for repeated measures, specifically with regard to: (1) time of day; (2) testing environment, including the location of the assessor; (3) administration procedures, prompts and instructions, ensuring the tool is administered at the same point in the session each time; and (4) stroke survivors’ perceived level of fatigue prior to each test. Additionally, the time between measurements needs to be carefully planned, with consideration of time post-injury. Determining how stable neglect is for each participant prior to a test-retest reliability study would help in determining the optimal time between measurements. It is also important for study samples to be representative of the whole stroke population, with assessment tools that account for other frequently co-existing stroke sequelae, such as aphasia.

Computer-based tools, such as eye-tracking, reaction-time tasks, and virtual reality tools that are highly sensitive to mild neglect signs, have the potential to address some of the limitations of the tools used in clinical practice. However, many of these tools are not available or accessible in most clinical practice settings. Further research should focus on the feasibility of integrating low-cost virtual reality and computer-based options into clinical practice.

Implications for clinicians

This review has not found a single tool or battery with high-quality evidence of sufficient psychometric properties. For now, we recommend that clinicians use a battery of tools for identifying neglect rather than relying on one tool. This is consistent with previous research recommending the use of multiple tools for the diagnosis of neglect (Azouvi et al., Citation2006). The shortlist of 11 tools categorized by subtype in Table 4 has the best psychometrics and should be used as a guide for clinicians. Clinicians can refer to Table 2 for a description of each tool, its scoring, completion time, and the neglect subtypes identified, to further inform their choice. It is important for clinicians to screen for a variety of neglect subtypes with standardized tools, using a combination of cognitive tests, such as cancellation and drawing tasks, along with more naturalistic observation of daily tasks with tools such as the Catherine Bergego Scale (CBS) (Azouvi, Citation1996; Chen et al., Citation2012) and the Mobility Assessment Course (Ten Brink et al., Citation2018; Verlander et al., Citation2000). Line cancellation is not recommended for routine use with all stroke survivors, as it has not been shown to be a valid assessment of neglect.

Conclusion

This systematic review critically appraised the evidence for the psychometric properties of 33 commonly used assessment tools for identifying spatial neglect in clinical practice. The general quality of the evidence was poor, and no one tool had high-quality evidence of both validity and reliability. Eleven tools show some promise, as they meet the minimum criteria for validity and reliability. However, due to the low quality of the reliability evidence, the true psychometric properties may be considerably different to what is reported. Further high-quality studies, using the COSMIN standards as a guide, are needed to compare the psychometric properties of multiple neglect assessment tools so that variations can be attributed to the test rather than to other factors related to the study sample. It is essential to use tools with sufficient reliability when determining whether interventions are effective, as a change in neglect scores does not necessarily reflect a real improvement in neglect signs if the reliability or measurement error of the tool is unknown.

Competing interests

KH is an author of the KF-NAP (one of the tools included in this review). KH did not extract data or assess the measurement properties/risk of bias for any studies relating to the KF-NAP or CBS. The other authors have no competing interests to declare that are relevant to the content of this article.

Authors’ contributions

Conceptualization: Lindy Williams, Tobias Loetscher, Jocelyn Kernot, Susan Hillier, Audrey Bowen; Literature search and screening: Lindy Williams, Tobias Loetscher, Jocelyn Kernot, Susan Hillier, Jennifer Jones; Data extraction and analysis: Lindy Williams, Tobias Loetscher, Jocelyn Kernot, Susan Hillier, Jennifer Jones, Kimberly Hreha; Writing – original draft preparation: Lindy Williams; Writing – review and editing: Lindy Williams, Tobias Loetscher, Jocelyn Kernot, Susan Hillier, Audrey Bowen, Kimberly Hreha, Jennifer Jones.

Supplemental material

Online Resource 1. Identification of tools

Online Resource 2. Full search strategy

Online Resource 3. Data items

Online Resource 4. Psychometrics and quality ratings

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data Availability statement

The data that support the findings of this study are available from the corresponding author upon request.

Additional information

Funding

This work was supported by an Australian Government Research Training Program Scholarship.

References

  • Albert, M. L. (1973). A simple test of visual neglect. Neurology, 23(6), 658–664. https://doi.org/10.1212/WNL.23.6.658
  • Aldridge, V. K., Dovey, T. M., & Wade, A. (2017, October). Assessing test-retest reliability of psychological measures. European Psychologist, 22(4), 207–218. https://doi.org/10.1027/1016-9040/a000298
  • Anderson, B., Mennemeier, M., & Chatterjee, A. (2000, June). Variability not ability: Another basis for performance decrements in neglect. Neuropsychologia, 38(6), 785–796. https://doi.org/10.1016/S0028-3932(99)00137-2
  • Azouvi, P. (1996). Functional consequences and awareness of unilateral neglect: Study of an evaluation scale. Neuropsychological Rehabilitation, 6(2), 133–150. https://doi.org/10.1080/713755501
  • Azouvi, P., Bartolomeo, P., Beis, J. M., Perennou, D., Pradat-Diehl, P., & Rousseaux, M. (2006). A battery of tests for the quantitative assessment of unilateral neglect. Restorative Neurology and Neuroscience, 24(4–6), 273–285.
  • Azouvi, P., Samuel, C., Louis-Dreyfus, A., Bernati, T., Bartolomeo, P., Beis, J. M., Chokron, S., Leclercq, M., Marchal, F., Martin, Y., De Montety, G., Olivier, S., Perennou, D., Pradat-Diehl, P., Prairial, C., Rode, G., Siéroff, E., Wiart, L., Rousseaux, M., & French Collaborative Study Group on Assessment of Unilateral Neglect (GEREN/GRECO). (2002). Sensitivity of clinical and behavioural tests of spatial neglect after right hemisphere stroke. Journal of Neurology, Neurosurgery & Psychiatry, 73(2), 160–166. https://doi.org/10.1136/jnnp.73.2.160
  • Bailey, M. J., Riddoch, M. J., & Crome, P. (2000). Evaluation of a test battery for hemineglect in elderly stroke patients for use by therapists in clinical practice. NeuroRehabilitation, 14(3), 139–150.
  • Bailey, M. J., Riddoch, M., & Crome, P. (2004). Test-retest stability of three tests for unilateral visual neglect in patients with stroke: Star Cancellation, Line Bisection, and the Baking Tray Task. Neuropsychological Rehabilitation, 14(4), 403–419. https://doi.org/10.1080/09602010343000282
  • Barrett, A. M., & Burkholder, S. (2006). Monocular patching in subjects with right-hemisphere stroke affects perceptual-attentional bias. The Journal of Rehabilitation Research and Development, 43(3), 337–345. https://doi.org/10.1682/JRRD.2005.01.0015
  • Bickerton, W. L., Samson, D., Williamson, J., & Humphreys, G. W. (2011). Separating forms of neglect using the Apples Test: Validation and functional prediction in chronic and acute stroke. Neuropsychology, 25(5), 567–580. https://doi.org/10.1037/a0023501
  • Bisiach, E., Ricci, R., Lualdi, M., & Colombo, M. R. (1998). Perceptual and response bias in unilateral neglect: Two modified versions of the Milner landmark task. Brain & Cognition, 37(3), 369–386. https://doi.org/10.1006/brcg.1998.1003
  • Bonato, M., Priftis, K., Umilta, C., & Zorzi, M. (2013). Computer-based attention-demanding testing unveils severe neglect in apparently intact patients. Behavioural Neurology, 26(3), 179–181. https://doi.org/10.1155/2013/139812
  • Bowen, A., Hazelton, C., Pollock, A., & Lincoln, N. B. (2013). Cognitive rehabilitation for spatial neglect following stroke. Cochrane Database of Systematic Reviews, 2013(7), CD003586. https://doi.org/10.1002/14651858.CD003586.pub3
  • Caplan, B. (1987). Assessment of unilateral neglect: A new reading test. Journal of Clinical and Experimental Neuropsychology, 9(4), 359–364. https://doi.org/10.1080/01688638708405056
  • Checketts, M., Mancuso, M., Fordell, H., Chen, P., Hreha, K., Eskes, G. A., Vuilleumier, P., Vail, A., & Bowen, A. (2021, October). Current clinical practice in the screening and diagnosis of spatial neglect post-stroke: Findings from a multidisciplinary international survey. Neuropsychological Rehabilitation, 31(9), 1495–1526. https://doi.org/10.1080/09602011.2020.1782946
  • Chen, P., Hreha, K., Fortis, P., Goedert, K., & Barrett, A. (2012, January). Functional assessment of spatial neglect: A review of the Catherine Bergego scale and an introduction of the Kessler foundation neglect assessment process. Topics in Stroke Rehabilitation, 19(5), 423–435. https://doi.org/10.1310/tsr1905-423
  • Civelek, G. M., Atalay, A., & Turhan, N. (2015). Association of ideomotor apraxia with lesion site, etiology, neglect, and functional independence in patients with first ever stroke. Topics in Stroke Rehabilitation, 22(2), 94–101. https://doi.org/10.1179/1074935714Z.0000000027
  • Cocchini, G., Beschin, N., & Jehkonen, M. (2001). The Fluff Test: A simple task to assess body representation neglect. Neuropsychological Rehabilitation, 11(1), 17–31. https://doi.org/10.1080/09602010042000132
  • Cocchini, G., & Beschin, N. (2022). The Fluff test: Improved scoring system to account for different degrees of contralesional and ipsilesional personal neglect in brain damaged patients. Neuropsychological Rehabilitation, 32(1), 69–83. https://doi.org/10.1080/09602011.2020.1797828
  • Cunningham, L. J., O’Rourke, K., Finlay, C., & Gallagher, M. (2017). A preliminary investigation into the psychometric properties of the Dublin extrapersonal neglect assessment (DENA): A novel screening tool for extrapersonal neglect. Neuropsychological Rehabilitation, 27(3), 349–368. https://doi.org/10.1080/09602011.2015.1084334
  • Delazer, M., Sojer, M., Ellmerer, P., Boehme, C., & Benke, T. (2018). Eye-tracking provides a sensitive measure of exploration deficits after acute right MCA stroke. Frontiers in Neurology, 9, 359. https://doi.org/10.3389/fneur.2018.00359
  • Demeyere, N., & Gillebert, C. R. (2019, May). Ego- and allocentric visuospatial neglect: Dissociations, prevalence, and laterality in acute stroke. Neuropsychology, 33(4), 490–498. https://doi.org/10.1037/neu0000527
  • Demaerschalk, B. M., Vegunta, S., Vargas, B. B., Wu, Q., Channer, D. D., & Hentz, J. G. (2012). Reliability of real-time video smartphone for assessing national institutes of health stroke scale scores in acute stroke patients. Stroke, 43(12), 3271–3277. https://doi.org/10.1161/STROKEAHA.112.669150
  • De-Rosende-celeiro, I., Rey-Villamayor, A., Francisco-De-miguel, I., & Ávila-álvarez, A. (2021). Independence in daily activities after stroke among occupational therapy patients and its relationship with unilateral neglect. International Journal of Environmental Research and Public Health, 18(14). https://doi.org/10.3390/ijerph18147537
  • Diller, L., & Weinberg, J. (1977). Hemi-inattention in rehabilitation. The evolution of a rational remediation program. In E. A. Weinstein & R. P. Friedland (Eds.), Hemi-inattention and hemisphere specialization: Advances in neurology (pp. 62–82). New York: Raven Press.
  • de Vet, H. C. W., Terwee, C. B., Mokkink, L. B., & Knol, D. L. (2011). Measurement in medicine: A practical guide. Cambridge University Press.
  • Esposito, E., Shekhtman, G., & Chen, P. (2021). Prevalence of spatial neglect post-stroke: A systematic review. Annals of Physical and Rehabilitation Medicine, 64(5), 101459. https://doi.org/10.1016/j.rehab.2020.10.010
  • Evald, L., Wilms, I., & Nordfang, M. (2021). Assessment of spatial neglect in clinical practice: A nationwide survey. Neuropsychological Rehabilitation, 31(9), 1374–1389. https://doi.org/10.1080/09602011.2020.1778490
  • Fordell, H., Bodin, K., Bucht, G., & Malm, J. (2011). A virtual reality test battery for assessment and screening of spatial neglect. Acta Neurologica Scandinavica, 123(3), 167–174. https://doi.org/10.1111/j.1600-0404.2010.01390.x
  • Gauthier, L., Dehaut, F., & Joanette, Y. (1989). The bells test: A quantitative and qualitative test for visual neglect. International Journal of Clinical Neuropsychology, 11(2), 49–54.
  • Gerafi, J., Samuelsson, H., Viken, J. I., Blomgren, C., Claesson, L., Kallio, S., Jern, C., Blomstrand, C., & Jood, K. (2017). Neglect and aphasia in the acute phase as predictors of functional outcome 7 years after ischemic stroke. European Journal of Neurology, 24(11), 1407–1415. https://doi.org/10.1111/ene.13406
  • Grech, M., Stuart, T., Williams, L., Chen, C., & Loetscher, T. (2017, October). The mobility assessment course for the diagnosis of spatial neglect: Taking a step forward? Frontiers in Neurology, 8, 563. https://doi.org/10.3389/fneur.2017.00563
  • Halligan, P., Wilson, B., & Cockburn, J. (1990). A short screening test for visual neglect in stroke patients. International Disability Studies, 12(3), 95–99. https://doi.org/10.3109/03790799009166260
  • Halligan, P. W., Cockburn, J., & Wilson, B. A. (1991). The behavioural assessment of visual neglect. Neuropsychological Rehabilitation, 1(1), 5–32. https://doi.org/10.1080/09602019108401377
  • Halligan, P. W., Burn, J. P., Marshall, J. C., & Wade, D. T. (1992). Visuo-spatial neglect: Qualitative differences and laterality of cerebral lesion. Journal of Neurology, Neurosurgery, and Psychiatry, 55(11), 1060–1068.
  • Hamilton, R. H., Coslett, H. B., Buxbaum, L. J., Whyte, J., & Ferraro, M. K. (2008). Inconsistency of performance on neglect subtype tests following acute right hemisphere stroke. Journal of the International Neuropsychological Society, 14(1), 23–32. https://doi.org/10.1017/S1355617708080077
  • Harvey, M., & Olk, B. (2004). Comparison of the Milner and Bisiach Landmark tasks: Can neglect patients be classified consistently? Cortex, 40(4-5), 659–665. https://doi.org/10.1016/S0010-9452(08)70162-X
  • Heilman, K. M., Watson, R. T., & Valenstein, E. (2003). Neglect and related disorders. In K. M. Heilman & E. Valenstein (Eds.), Clinical neuropsychology (4th ed., pp. 296–346). Oxford University Press.
  • Husain, M., Mattingley, J. B., Rorden, C., Kennard, C., & Driver, J. (2000). Distinguishing sensory and motor biases in parietal and frontal neglect. Brain, 123(8), 1643–1659. https://doi.org/10.1093/brain/123.8.1643
  • Kaufmann, B. C., Cazzoli, D., Pflugshaupt, T., Bohlhalter, S., Vanbellingen, T., Müri, R. M., Nef, T., & Nyffeler, T. (2020). Eyetracking during free visual exploration detects neglect more reliably than paper-pencil tests. Cortex, 129, 223–235. https://doi.org/10.1016/j.cortex.2020.04.021
  • Kamtchum-Tatuene, J., Allali, G., Saj, A., Bernati, T., Sztajzel, R., Pollak, P., Momjian-Mayor, I., & Kleinschmidt, A. (2017). An exploratory cohort study of sensory extinction in acute stroke: Prevalence, risk factors, and time course. Journal of Neural Transmission, 124(4), 483–494. https://doi.org/10.1007/s00702-016-1663-x
  • Kerkhoff, G., & Schenk, T. (2012). Rehabilitation of neglect: An update. Neuropsychologia, 50(6), 1072–1079. https://doi.org/10.1016/j.neuropsychologia.2012.01.024
  • Kettunen, J. E., Nurmi, M., Dastidar, P., & Jehkonen, M. (2012). Recovery from visual neglect after right hemisphere stroke: Does starting point in cancellation tasks change after 6 months? The Clinical Neuropsychologist, 26(2), 305–320. https://doi.org/10.1080/13854046.2011.648213
  • Kim, T.-L., Kim, K., Choi, C., Lee, J.-Y., & Shin, J.-H. (2021). FOPR test: A virtual reality-based technique to assess field of perception and field of regard in hemispatial neglect. Journal of NeuroEngineering and Rehabilitation, 18(1). https://doi.org/10.1186/s12984-021-00835-1
  • Klinke, M. E., Hjaltason, H., Tryggvadóttir, G. B., & Jónsdóttir, H. (2018). Hemispatial neglect following right hemisphere stroke: Clinical course and sensitivity of diagnostic tasks. Topics in Stroke Rehabilitation, 25(2), 120–130. https://doi.org/10.1080/10749357.2017.1394632
  • Làdavas, E. (1994). The role of visual attention in neglect: A dissociation between perceptual and directional motor neglect. Neuropsychological Rehabilitation, 4(2), 155–159. https://doi.org/10.1080/09602019408402275
  • Lee, B. H., Kang, S. J., Park, J. M., Son, Y., Lee, K. H., Adair, J. C., Heilman, K. M., & Na, D. L. (2004). The character-line bisection task: A new test for hemispatial neglect. Neuropsychologia, 42(12), 1715–1724. https://doi.org/10.1016/j.neuropsychologia.2004.02.015
  • Leibovitch, F. S., Vasquez, B. P., Ebert, P. L., Beresford, K. L., & Black, S. E. (2012). A short bedside battery for visuoconstructive hemispatial neglect: Sunnybrook neglect assessment procedure (SNAP). Journal of Clinical and Experimental Neuropsychology, 34(4), 359–368. https://doi.org/10.1080/13803395.2011.645016
  • Luukkainen-Markkula, R., Tarkka, I. M., Pitkanen, K., Sivenius, J., & Hamalainen, H. (2011). Comparison of the behavioural inattention test and the Catherine Bergego scale in assessment of hemispatial neglect. Neuropsychological Rehabilitation, 21(1), 103–116. https://doi.org/10.1080/09602011.2010.531619
  • Marshall, J. C., & Halligan, P. W. (1993). Visuo-spatial neglect: A new copying test to assess perceptual parsing. Journal of Neurology, 240(1), 37–40. https://doi.org/10.1007/BF00838444
  • McIntosh, R. D., Brodie, E. E., Beschin, N., & Robertson, I. H. (2000). Improving the clinical diagnosis of personal neglect: A reformulated comb and razor test. Cortex, 36(2), 289–292. https://doi.org/10.1016/s0010-9452(08)70530-6
  • Menon, A., & Korner-Bitensky, N. (2004). Evaluating unilateral spatial neglect post stroke: Working your way through the maze of assessment choices. Topics in Stroke Rehabilitation, 11(3), 41–66. https://doi.org/10.1310/KQWL-3HQL-4KNM-5F4U
  • Menon-Nair, A., Korner-Bitensky, N., & Ogourtsova, T. (2007). Occupational therapists’ identification, assessment, and treatment of unilateral spatial neglect during stroke rehabilitation in Canada. Stroke, 38(9), 2556–2562. https://doi.org/10.1161/STROKEAHA.107.484857
  • Mokkink, L. B., et al. (2020). COSMIN Risk of Bias tool to assess the quality of studies on reliability or measurement error of outcome measurement instruments: A Delphi study. BMC Medical Research Methodology, 20(1), 293. https://doi.org/10.1186/s12874-020-01179-5
  • Mokkink, L. B., de Vet, H. C. W., Prinsen, C. A. C., Patrick, D. L., Alonso, J., Bouter, L. M., & Terwee, C. B. (2018). COSMIN risk of bias checklist. Retrieved March 6, 2023, from https://www.cosmin.nl/tools/checklists-assessing-methodological-study-qualities/
  • Mokkink, L. B., de Vet, H. C. W., Prinsen, C. A. C., Patrick, D. L., Alonso, J., Bouter, L. M., & Terwee, C. B. (2018). COSMIN Risk of Bias checklist for systematic reviews of patient-reported outcome measures. Quality of Life Research, 27(5), 1171–1179. https://doi.org/10.1007/s11136-017-1765-4
  • Mokkink, L. B., Prinsen, C. A., Patrick, D. L., Alonso, J., Bouter, L. M., de Vet, H. C., & Terwee, C. B. (2018). COSMIN methodology for systematic reviews of Patient-Reported Outcome Measures (PROMs) user manual. The COSMIN Initiative.
  • Montedoro, V., Alsamour, M., Dehem, S., Lejeune, T., Dehez, B., & Edwards, M. G. (2019). Robot diagnosis test for egocentric and allocentric hemineglect. Archives of Clinical Neuropsychology, 34(4), 481–494. https://doi.org/10.1093/arclin/acy062
  • Moore, M. J., Vancleef, K., Shalev, N., & Demeyere, N. (2018). When neglect is neglected: NIHSS observational measure lacks sensitivity in identifying post-stroke hemispatial neglect. International Journal of Stroke, 13, 21.
  • Nelemans, K. N., Nijboer, T. C. W., & Ten Brink, A. F. (2022). The mobility assessment course: A ready-to-use dynamic measure of visuospatial neglect. Journal of Neuropsychology, 16(3), 498–517. https://doi.org/10.1111/jnp.12277
  • Nurmi, L., Kettunen, J., Laihosalo, M., Ruuskanen, E.-I., Koivisto, A.-M., & Jehkonen, M. (2010). Right hemisphere infarct patients and healthy controls: Evaluation of starting points in cancellation tasks. Journal of the International Neuropsychological Society, 16(5), 902–909. https://doi.org/10.1017/S1355617710000792
  • Ogden, J. A. (1985). Anterior-posterior interhemispheric differences in the loci of lesions producing visual hemineglect. Brain and Cognition, 4(1), 59–75. https://doi.org/10.1016/0278-2626(85)90054-5
  • Ortigue, S., Mégevand, P., Perren, F., Landis, T., & Blanke, O. (2006). Double dissociation between representational personal and extrapersonal neglect. Neurology, 66(9), 1414–1417. https://doi.org/10.1212/01.wnl.0000210440.49932.e7
  • Ota, H., Fujii, T., Suzuki, K., Fukatsu, R., & Yamadori, A. (2001). Dissociation of body-centered and stimulus-centered representations in unilateral neglect. Neurology, 57(11), 2064–2069. https://doi.org/10.1212/wnl.57.11.2064
  • Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71
  • Pedroli, E., Serino, S., Cipresso, P., Pallavicini, F., & Riva, G. (2015). Assessment and rehabilitation of neglect using virtual reality: A systematic review. Frontiers in Behavioral Neuroscience, 9, 226. https://doi.org/10.3389/fnbeh.2015.00226
  • Pflugshaupt, T., Bopp, S. A., Heinemann, D., Mosimann, U. P., von Wartburg, R., Nyffeler, T., Hess, C. W., & Müri, R. M. (2004). Residual oculomotor and exploratory deficits in patients with recovered hemineglect. Neuropsychologia, 42(9), 1203–1211. https://doi.org/10.1016/j.neuropsychologia.2004.02.002
  • Plummer, P., Morris, M. E., & Dunai, J. (2003). Assessment of unilateral neglect. Physical Therapy, 83(8), 732–740. https://doi.org/10.1093/ptj/83.8.732
  • Puig-Pijoan, A., Giralt-Steinhauer, E., Zabalza de Torres, A., Manero Borràs, R. M., Sánchez-Benavides, G., García Escobar, G., Pérez Enríquez, C., Gómez-González, A., Ois, Á., Rodríguez-Campello, A., Cuadrado-Godía, E., Jiménez-Conde, J., Peña-Casanova, J., & Roquer, J. (2018). Underdiagnosis of unilateral spatial neglect in stroke unit. Acta Neurologica Scandinavica, 138(5), 441–446. https://doi.org/10.1111/ane.12998
  • Rengachary, J., d’Avossa, G., Sapir, A., Shulman, G. L., & Corbetta, M. (2009). Is the Posner reaction time test more accurate than clinical tests in detecting left neglect in acute and chronic stroke? Archives of Physical Medicine and Rehabilitation, 90(12), 2081–2088. https://doi.org/10.1016/j.apmr.2009.07.014
  • Rorden, C., & Karnath, H.-O. (2010). A simple measure of neglect severity. Neuropsychologia, 48(9), 2758–2763. https://doi.org/10.1016/j.neuropsychologia.2010.04.018
  • Rousseaux, M., Allart, E., Bernati, T., & Saj, A. (2015). Anatomical and psychometric relationships of behavioral neglect in daily living. Neuropsychologia, 70, 64–70. https://doi.org/10.1016/j.neuropsychologia.2015.02.011
  • Saj, A., Honoré, J., Bernati, T., & Rousseaux, M. (2012). Influence of spatial neglect, hemianopia and hemispace on the subjective vertical. European Neurology, 68(4), 240–246. https://doi.org/10.1159/000339266
  • Samuelsson, H., Hjelmquist, E., Naver, H., & Blomstrand, C. (1996). Visuospatial neglect and an ipsilesional bias during the start of performance in conventional tests of neglect. The Clinical Neuropsychologist, 10(1), 15–24. https://doi.org/10.1080/13854049608406659
  • Sarri, M., Greenwood, R., Kalra, L., & Driver, J. (2009). Task-related modulation of visual neglect in cancellation tasks. Neuropsychologia, 47(1), 91–103. https://doi.org/10.1016/j.neuropsychologia.2008.08.020
  • Schenkenberg, T., Bradford, D. C., & Ajax, E. T. (1980). Line bisection and unilateral visual neglect in patients with neurologic impairment. Neurology, 30(5), 509. https://doi.org/10.1212/WNL.30.5.509
  • Spanò, B., Nardo, D., Giulietti, G., Matano, A., Salsano, I., Briani, C., Vadalà, R., Marzi, C., De Luca, M., Caltagirone, C., & Santangelo, V. (2022). Left egocentric neglect in early subacute right-stroke patients is related to damage of the superior longitudinal fasciculus. Brain Imaging and Behavior, 16(1), 211–218. https://doi.org/10.1007/s11682-021-00493-w
  • Stein, C., Bunker, L., Chu, B., Leigh, R., Faria, A., & Hillis, A. (2022). Various tests of left neglect are associated with distinct territories of hypoperfusion in acute stroke. Brain Communications, 4(2), fcac064. https://doi.org/10.1093/braincomms/fcac064
  • Stone, S. P., Wilson, B., Wroot, A., Halligan, P. W., Lange, L., Marshall, J. C., & Greenwood, R. J. (1991). The assessment of visuo-spatial neglect after acute stroke. Journal of Neurology, Neurosurgery & Psychiatry, 54(4), 345–350. https://doi.org/10.1136/jnnp.54.4.345
  • Ten Brink, A. F., Visser-Meily, J. M. A., & Nijboer, T. C. W. (2018). Dynamic assessment of visual neglect: The mobility assessment course as a diagnostic tool. Journal of Clinical and Experimental Neuropsychology, 40(2), 161–172. https://doi.org/10.1080/13803395.2017.1324562
  • Terwee, C. B., Jansma, E. P., Riphagen, I. I., & de Vet, H. C. W. (2009). Development of a methodological PubMed search filter for finding studies on measurement properties of measurement instruments. Quality of Life Research, 18(8), 1115–1123. https://doi.org/10.1007/s11136-009-9528-5
  • Terwee, C. B., & Mokkink, L. B. (n.d.). COSMIN: Help organising your risk of bias ratings. COSMIN Guideline for Systematic Reviews of Outcome Measurement Instruments. Retrieved January 11, 2023, from https://www.cosmin.nl/tools/guideline-conducting-systematic-review-outcome-measures/
  • Tham, K., & Tegner, R. (1996). The baking tray task: A test of spatial neglect. Neuropsychological Rehabilitation, 6(1), 19–26. https://doi.org/10.1080/713755496
  • Toba, M. N., Pagliari, C., Rabuffetti, M., Nighoghossian, N., Rode, G., Cotton, F., Spinazzola, L., Baglio, F., Migliaccio, R., & Bartolomeo, P. (2021). Quantitative assessment of motor neglect. Stroke, 52(5), 1618–1627. https://doi.org/10.1161/STROKEAHA.120.031949
  • Verlander, D., Hayes, A., McInnes, J. K., Liddle, G. W., Clarke, G. E., Clarke, M. S., Russell, M., Ferguson, W., & Walsh, P. G. (2000). Assessment of clients with visual spatial disorders: A pilot study. Visual Impairment Research, 2(3), 129–142. https://doi.org/10.1076/vimr.2.3.129.4422
  • Webster, J. S., Cottam, G., Gouvier, W. D., Blanton, P., Beissel, G. F., & Wofford, J. (1989). Wheelchair obstacle course performance in right cerebral vascular accident victims. Journal of Clinical and Experimental Neuropsychology, 11(2), 295–310.
  • Whitehouse, C. E., Green, J., Giles, S. M., Rahman, R., Coolican, J., & Eskes, G. A. (2019). Development of the Halifax visual scanning test: A new measure of visual-spatial neglect for personal, peripersonal, and extrapersonal space. Journal of the International Neuropsychological Society, 25(5), 490–500. https://doi.org/10.1017/S135561771900002X
  • Williams, L. J., Kernot, J., Hillier, S. L., & Loetscher, T. (2021). Spatial neglect subtypes, definitions and assessment tools: A scoping review. Frontiers in Neurology, 12, 742365. https://doi.org/10.3389/fneur.2021.742365
  • Wilson, B., Cockburn, J., & Halligan, P. (1987). Development of a behavioral test of visuospatial neglect. Archives of Physical Medicine and Rehabilitation, 68(2), 98–102.
  • Woods, M., Williamson, J. B., White, K. D., Maitland, C. G., & Heilman, K. M. (2017). Shifting spatial neglect with repeated line bisections: Possible role of lateralized attentional fatigue. Cognitive and Behavioral Neurology, 30(1), 30–36. https://doi.org/10.1097/WNN.0000000000000118
  • Ye, L.-L., Xie, H.-X., Cao, L., & Song, W.-Q. (2022). Therapeutic effects of transcranial magnetic stimulation on visuospatial neglect revealed with event-related potentials. Frontiers in Neurology, 12, 799058. https://doi.org/10.3389/fneur.2021.799058
  • Zoccolotti, P., Antonucci, G., & Judica, A. (1992). Psychometric characteristics of two semi-structured scales for the functional evaluation of hemi-inattention in extrapersonal and personal space. Neuropsychological Rehabilitation, 2(3), 179–191. https://doi.org/10.1080/09602019208401407
  • Zuverza-Chavarria, V., & Tsanadis, J. (2011). Measurement properties of the CLOX executive clock drawing task in an inpatient stroke rehabilitation setting. Rehabilitation Psychology, 56(2), 138–144. https://doi.org/10.1037/a0023465