Investigating the Reliability of the Civics Component of the U.S. Naturalization Test

Pages 317-341 | Published online: 01 Dec 2011
 

Abstract

In this study, I investigated the reliability of the U.S. Naturalization Test's civics component by asking 414 individuals to take a mock U.S. citizenship test comprising civics test questions. Using an incomplete block design of six forms, each with 16 nonoverlapping items and four anchor items (the anchors connected the six subsets of civics test items), I applied Rasch analysis to the data. The analysis estimated how difficult the items are, whether they are interchangeable, and how reliably they measure civics knowledge. In addition, I estimated how uniformly difficult the items are for noncitizens (N = 187) and citizens (N = 225) and how accurate the cutoff score is. Results demonstrated that the items vary widely in difficulty and do not all reliably measure civics knowledge. Most items do not function differently for citizens and noncitizens. The cutoff score, as applied in the operational test, is not as accurate as it could be. The data revealed that test scores contain construct-irrelevant variance that undermines the overall reliability and validity of the instrument. I discuss these results not only to better understand the civics test but also to recommend how United States Citizenship and Immigration Services could conduct a similar study with the goal of raising the reliability and validity of the test.
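The following is a minimal sketch, in Python, of how an incomplete block design of this kind can be laid out; the item labels and the random assignment are hypothetical and are shown only to illustrate how four shared anchor items link six otherwise nonoverlapping 16-item blocks drawn from the 100-item civics pool.

    import random

    random.seed(0)

    # Hypothetical labels for the 100-item civics pool.
    item_pool = [f"Q{i:03d}" for i in range(1, 101)]
    random.shuffle(item_pool)

    anchors = item_pool[:4]        # the four anchor items appear on every form
    unique_items = item_pool[4:]   # 96 items, each used on exactly one form

    forms = {}
    for f in range(6):
        block = unique_items[f * 16:(f + 1) * 16]    # 16 nonoverlapping items
        forms[f"Form {f + 1}"] = anchors + block     # 20 items per form

    for name, items in forms.items():
        print(name, len(items), "items; anchors:", items[:4])

Because every form shares the same four anchors, responses to the anchors place all six item subsets on a common difficulty scale when the data are analyzed with a Rasch model.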

ACKNOWLEDGMENTS

I presented an earlier version of this article in 2010 at the Midwest Association of Language Testers conference in Dayton, Ohio. I thank the students in my graduate-level language testing class for their help with data collection and for their lively in-class discussions on the issues brought up in this article. I thank Spiros Papageorgiou, Ching-Ni Hsieh, and three anonymous LAQ reviewers for their comments. Any mistakes, however, are my own.

Notes

1According to the USCIS website (http://www.uscis.gov), the application fee of $675 comprises a general fee of $595 plus a biometrics fee of $80. Applicants who are 75 years or older are not charged the biometrics fee. Military applicants filing under Sections 328 and 329 of the INA are not charged an application fee.

2USCIS is vague in describing how English proficiency is evaluated. There are three English proficiency tests: speaking, reading, and writing. Speaking is assessed by the USCIS officer over the course of the interview. For the reading test, the candidate must read aloud one of three written English sentences. For the writing test, the candidate must correctly write down one of three dictated English sentences. Key vocabulary used in the reading and writing tests is provided as study material by USCIS. Information on how the tests are scored is not provided. The official description of the English tests and the official USCIS study materials can be found on the USCIS website (http://www.uscis.gov), particularly at http://www.uscis.gov/newtest.

3To my knowledge, no publicly available information explains how USCIS officers select a 10-question form or states whether their discretion to select a form is at all limited. No information is available as to form equality or composition.

4A 1998 Immigration and Naturalization Service–commissioned study found that out of 7,843 naturalization applications, 34% were denied due to failure on the English test, the civics test, or both (del Valle, as cited in CitationKunnan, 2009, p. 113).

5As explained by CitationBrown (2005), an item's difficulty index is the percentage of test takers who correctly answered the item. An item's discrimination index is a statistic that indicates the degree to which the item separates the test takers who performed well overall on the test from those who performed poorly.
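A minimal sketch of these two classical item statistics, using toy response data; the upper-/lower-group form of the discrimination index shown here is one common formulation and is chosen only for illustration.

    import numpy as np

    # Toy 0/1 response matrix: one row per test taker, one column per item.
    rng = np.random.default_rng(1)
    responses = (rng.random((200, 10)) < 0.6).astype(int)

    # Difficulty index: proportion of test takers who answered each item correctly.
    difficulty = responses.mean(axis=0)

    # Discrimination index (upper/lower-group form): difference in proportion
    # correct between the top 27% and bottom 27% of test takers by total score.
    totals = responses.sum(axis=1)
    order = np.argsort(totals)
    k = int(round(0.27 * len(totals)))
    discrimination = responses[order[-k:]].mean(axis=0) - responses[order[:k]].mean(axis=0)

    print("difficulty:", np.round(difficulty, 2))
    print("discrimination:", np.round(discrimination, 2))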

6An anonymous LAQ reviewer duly noted that this study's test takers may differ from the true test takers because “motivation to pass the test may affect engagement with the test materials and hence performance.” However, test developers, out of practical necessity, often pilot test items on experimental test takers who are similar to real test takers but whose scores are not used for decision-making purposes. Data from pilot testing are regularly used to develop norming criteria and/or to validate future uses of tests.

7To access the video, go to http://www.uscis.gov/citizenship and then look under Learners > Study For The Test > Study Materials For The English Test.

8When you look at the passing rates, the test appears to be working at the extreme ends of ability (the double-passes are mostly citizens, and the double-fails are mostly noncitizens), but almost any test discriminates well at the extremes. What is more difficult to achieve is a test that reliably and accurately measures the ability of test takers with average skill levels, and this test does not do well in reliably measuring those with average civics knowledge. This is additionally problematic because the cutoff is at the average level of ability: the region of ability in which the test may be the least reliable is precisely where its responsibility in decision making resides.

9I am thankful to an anonymous LAQ reviewer for this argumentation.

10The formula for calculating the SEM is SEM = S√(1 − rxx′), where S is the standard deviation of the test and rxx′ is the reliability estimate for the test. This formula and an explanation of SEM can be found on page 189 of CitationBrown (2005).
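As a short worked example, the values below are illustrative only (they are not statistics reported in this study); they simply show how the SEM combines the score standard deviation and the reliability estimate, and how it places an error band around an observed score near the operational cutoff of six correct answers out of ten.

    import math

    # Illustrative values only, not figures from the study.
    S = 2.1       # standard deviation of test scores
    r_xx = 0.75   # reliability estimate for the test

    sem = S * math.sqrt(1 - r_xx)
    print(f"SEM = {sem:.2f}")

    # A test taker who answers 6 of 10 items correctly has a rough 68% error
    # band of 6 +/- SEM, which straddles the passing cutoff of 6 correct.
    observed = 6
    print(f"68% band: {observed - sem:.2f} to {observed + sem:.2f}")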
