
Evaluating the Instructional Sensitivity of Four States' Student Achievement Tests

