Auditing for Score Inflation Using Self-Monitoring Assessments: Findings From Three Pilot Studies
