Commentary

Assessment of Complex Cognition: Commentary on the Design and Validation of Assessments

References

  • AERA, APA, & NCME. (2014). Standards for educational and psychological testing. Washington, DC: AERA.
  • Bennett, R. E. (2010). Cognitively based assessment of, for, and as learning: A preliminary theory of action for summative and formative assessment. Measurement: Interdisciplinary Research and Perspectives, 8, 70–91.
  • Bennett, R. E., & Gitomer, D. H. (2009). Transforming K–12 assessment: Integrating accountability testing, formative assessment, and professional support. In C. Wyatt-Smith & J. Cumming (Eds.), Educational assessment in the 21st century (pp. 43–61). New York, NY: Springer.
  • Bloom, B. S. (Ed.). (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York, NY: David McKay.
  • Goldman, S. R. (2012). Adolescent literacy: Learning and understanding content. Future of Children, 22, 89–116.
  • Goldman, S. R., & Snow, C. (in press). Adolescent literacy: Development and instruction. In A. Pollatsek & R. Treiman (Eds.), The Oxford handbook of reading. New York, NY: Oxford University Press.
  • Krathwohl, D. (2002). A revision of Bloom's taxonomy: An overview. Theory Into Practice, 41, 212–218.
  • Marion, S. F., & Pellegrino, J. W. (2006). A validity framework for evaluating the technical quality of alternate assessments. Educational Measurement: Issues and Practice, 25, 47–57.
  • Mislevy, R. J., & Haertel, G. D. (2006). Implications of evidence-centered design for educational testing. Educational Measurement: Issues and Practice, 25, 6–20.
  • Mislevy, R. J., & Riconscente, M. M. (2006). Evidence-centered assessment design: Layers, concepts, and terminology. In S. Downing & T. Haladyna (Eds.), Handbook of test development (pp. 61–90). Mahwah, NJ: Erlbaum.
  • National Research Council. (2003). Assessment in support of learning and instruction: Bridging the gap between large-scale and classroom assessment. Washington, DC: National Academies Press.
  • Pellegrino, J. W., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academies Press.
  • Reisman, A. (2012). Reading like a historian: A document-based history curriculum intervention in urban high schools. Cognition and Instruction, 30, 86–112.
  • Ruiz-Primo, M. A., Shavelson, R. J., Hamilton, L., & Klein, S. (2002). On the evaluation of systemic science education reform: Searching for instructional sensitivity. Journal of Research in Science Teaching, 39, 369–393.
  • Schoenbach, R., Greenleaf, C., & Murphy, L. (2012). Reading for understanding: How reading apprenticeship improves disciplinary learning in secondary and college classrooms (2nd ed.). San Francisco, CA: Jossey-Bass.
  • Smith, C. L., Wiser, M., Anderson, C. W., & Krajcik, J. (2006). Implications of research on children's learning for standards and assessment: A proposed learning progression for matter and the atomic molecular theory. Measurement: Interdisciplinary Research and Perspectives, 4, 1–98.
  • Walker, A. (1994). The complete stories (p. 107). London: The Women's Press.
  • Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco, CA: Jossey-Bass.
  • Wilson, M. (2004a). Constructing measures: An item response modeling approach. Mahwah, NJ: Erlbaum.
  • Wilson, M. (2004b). A perspective on current trends in assessment and accountability: Degrees of coherence. In M. Wilson (Ed.), Towards coherence between classroom assessment and accountability. 103rd Yearbook of the National Society for the Study of Education, Part II (pp. 272–283). Chicago, IL: University of Chicago Press.
  • Wilson, M. (2009). Measuring progressions: Assessment structures underlying a learning progression. Journal of Research in Science Teaching, 46, 716–730.
  • Wilson, M. (2014). Considerations for measuring learning progressions where the target learning is represented as a cycle. Pensamiento Educativo. Revista De Investigación Educacional Latinoamericana, 51, 156–174.
  • Wilson, M., & Draney, K. (2013). A strategy for assessment of competencies in higher education: The BEAR assessment system. In S. Blomeke, O. Zlatkin-Troitschanskaia, C. Kuhn, & J. Fege (Eds.), Modeling and measuring competencies in higher education: Tasks and challenges (pp. 61–80). Rotterdam, The Netherlands: Sense Publishers.
  • Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system. Applied Measurement in Education, 13, 181–208.
