Abstract
Throughout the world, tests are administered to some examinees who are not fully proficient in the language in which they are being tested. It has long been acknowledged that proficiency in the language in which a test is administered often affects examinees’ performance. Depending on the context and intended uses of a particular assessment, linguistic proficiency may be relevant to the tested construct and subsequent interpretations, or it may be a source of construct-irrelevant variance that undermines accurate interpretation of the test performance of linguistic minorities who are not proficient in the language of the assessment. In this article, we highlight key validity issues to be considered when testing linguistic minorities, regardless of whether language is central to the construct or construct-irrelevant. We discuss examples of the different types of studies test users and developers could conduct to evaluate the validity of linguistic minorities’ scores. These issues span test development and validation activities. We conclude with a list of critical factors to consider in test development and evaluation whenever linguistic minorities are tested.
Notes
Individuals who are bilingual, meaning that they are proficient in both the testing language and another language, would not be considered linguistic minorities due to their high level of proficiency in the language of the assessment.
We use the term “translation” here, but in the cross-lingual assessment community the term “adaptation” is more common because it does not imply a literal word-for-word translation. Rather, adaptation in this context refers to conveying the intended meaning of test material across languages without the constraints of a literal translation (Hambleton, Merenda, & Spielberger, 2005; International Test Commission, 2010).
However, linguistic minorities may be unfamiliar with using “bubble sheets” to record their answers, so training or practice may be important to reduce stress and recording errors.