Original Article

Developing Assessments of Content Knowledge for Teaching Using Evidence-centered Design

Pages 91-111 | Published online: 27 Apr 2020

ABSTRACT

Assessments of teacher content knowledge are increasingly designed to provide evidence of the content knowledge needed to carry out the moment-to-moment work of teaching. Often these assessments focus on content knowledge used only in teaching, with the goal of testing distinctly professional types of content knowledge. In this paper, we argue that while this general approach has produced powerful exemplars of new types of assessment tasks, it has been less successful in developing tests that provide more general evidence of the range of content knowledge associated with particular teaching practices. To illustrate a more systematic approach, we describe the use of evidence-centered design (ECD) to develop an assessment of content knowledge for teaching (CKT) in secondary physics (energy).

Endnotes

The M2* statistic (Cai & Hansen, 2013; Cai, Maydeu-Olivares, Coffman, & Thissen, 2006; Maydeu-Olivares & Joe, 2006) was developed to address shortcomings of the G2 or X2 statistics when sparsity exists in the contingency table. The M2* was used as a test of the absolute fit of the model, whereas the RMSEA (computed using lower-order marginal tables) was used as a test of adequate or close fit.
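For illustration, the close-fit check can be sketched using the standard RMSEA formula for a chi-square-distributed fit statistic such as M2*. This is a Python sketch (the analyses themselves were run in R), and the statistic, degrees of freedom, and sample size below are hypothetical, not values from the study:

```python
import math

def rmsea(stat: float, df: int, n: int) -> float:
    """RMSEA from a chi-square-distributed fit statistic (e.g., M2*).

    Uses the standard formula sqrt(max(0, (stat - df) / (df * (n - 1)))),
    where n is the sample size; values at or below the df yield 0.
    """
    return math.sqrt(max(0.0, (stat - df) / (df * (n - 1))))

# Hypothetical values: statistic 132.4 on 100 df with 500 examinees.
print(round(rmsea(132.4, 100, 500), 3))
```

Conventional benchmarks treat RMSEA values near .05 or below as indicating close fit, which is the kind of criterion the endnote's "adequate or close fit" language refers to.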

There is an extensive literature on the bi-factor model in the context of social science data (see Rodriguez, Reise, & Haviland, 2016). This hierarchical model is popular because its formulation often reflects the theory of the construct that the data are meant to assess. The bi-factor model fits a general or broad construct together with specific or group factors, which are narrower constructs. These secondary factors are orthogonal to the general construct and can therefore be interpreted as residuals. All items load on the general construct, and exploratory bi-factor analysis can determine which specific/group factor, if any, each item belongs to.
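The loading structure described above can be sketched numerically. This Python sketch (the study itself used R) uses hypothetical item counts and loading values: every item loads on the general factor, each group factor picks up a disjoint block of items, and orthogonality means an item's communality is simply the sum of its squared loadings:

```python
import numpy as np

# Sketch of a bi-factor loading pattern for 9 hypothetical items:
# every item loads on the general factor (column 0); each of the
# three group factors (columns 1-3) covers a disjoint block of 3 items.
general = np.full(9, 0.6)
groups = np.zeros((9, 3))
for g in range(3):
    groups[3 * g: 3 * (g + 1), g] = 0.4

loadings = np.column_stack([general, groups])

# Because the factors are orthogonal, each item's communality is
# just the sum of its squared loadings.
communalities = (loadings ** 2).sum(axis=1)
print(loadings.shape)               # (9, 4)
print(round(communalities[0], 2))   # 0.52
```

The zero entries in the group-factor columns are what make this a bi-factor rather than a general hierarchical pattern: each item belongs to at most one group factor beyond the general construct.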

The IRT models were fitted using the R (R Development Core Team, 2018) package mirt (Chalmers, 2012), which employs full-information maximum likelihood estimation. The IRT parameters were also converted into the familiar factor loadings to facilitate the (generalized linear) EFA. The model fit statistics were likewise calculated using mirt. The factor rotations required for the EFA procedure were done using the psych package in R, which can perform a bi-factor orthogonal rotation (Jennrich & Bentler, 2011). Following Jennrich and Bentler, all factor loadings below .30 were constrained to 0, and a confirmatory bi-factor model was fitted to the data to obtain the final parameter estimates.
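The thresholding step — zeroing exploratory loadings below .30 to define the confirmatory pattern — can be sketched as follows. This is a Python sketch with made-up loadings; the original analysis used mirt and psych in R:

```python
import numpy as np

# Hypothetical exploratory loadings after a bi-factor rotation
# (rows = items; column 0 = general factor, columns 1-2 = group factors).
loadings = np.array([
    [0.62, 0.45, 0.08],
    [0.55, 0.12, 0.41],
    [0.48, 0.29, 0.33],
])

# Constrain loadings below .30 (in absolute value) to zero; the
# resulting zero/nonzero pattern specifies the confirmatory
# bi-factor model that is then refitted for final estimates.
pattern = np.where(np.abs(loadings) >= 0.30, loadings, 0.0)
print(pattern)
```

In the confirmatory refit, only the positions left nonzero in `pattern` are freely estimated; the zeroed positions are fixed, which is what converts the exploratory solution into a testable confirmatory structure.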

Notes

2 Readers interested in a more detailed explanation of the CR scoring process can refer to Etkina et al. (2018).

Additional information

Funding

This work was supported by the National Science Foundation [DRL 1222732]. The findings and opinions expressed in this report do not reflect the positions or policies of the National Science Foundation.
