
Shrinkage of Value-Added Estimates and Characteristics of Students with Hard-to-Predict Achievement Levels

Pages 1-10 | Received 01 Jan 2015, Accepted 01 Apr 2016, Published online: 28 Jun 2016

ABSTRACT

It is common in the implementation of teacher accountability systems to use empirical Bayes shrinkage to adjust teacher value-added estimates by their level of precision. Because value-added estimates based on fewer students and students with “hard-to-predict” achievement will be less precise, the procedure could have differential impacts on the probability that the teachers of fewer students or students with hard-to-predict achievement will be assigned consequences. This article investigates how shrinkage affects the value-added estimates of teachers of hard-to-predict students. We found that teachers of students with low prior achievement and who receive free lunch tend to have less precise value-added estimates. However, in our sample, shrinkage had no statistically significant effect on the relative probability that teachers of hard-to-predict students received consequences.

1. Introduction

Due to the incentives provided by the federal Race to the Top program and Elementary and Secondary Education Act (ESEA) waivers, districts and states rapidly implemented new teacher evaluation systems to make high-stakes decisions about tenure, pay, and retention, based in part on statistical measures of teacher effectiveness. These evaluation systems generally incorporated multiple measures of effective teaching, such as student achievement growth, classroom observations, and student learning objectives (Mihaly et al. 2013). The student achievement growth component has been one of the most controversial, and districts have measured it using different approaches. One common approach has been to use a value-added model (VAM) to estimate teachers' contributions to student achievement on standardized assessments. Under the 2015 reauthorization of ESEA, the federal government gave states more control over their teacher evaluation systems. As states rethink their evaluation systems, there is an even greater need to understand how statistical models like VAMs contribute to assigning teachers to performance categories.

VAMs predict individual student achievement based on the student's characteristics, including baseline achievement, and compare this prediction with the actual achievement of a teacher's students. The prediction is derived using data on other students in the state or district and represents what we would expect the student to achieve if he or she were taught by the average teacher. The difference between how a teacher's students actually performed and how they were predicted to perform represents the estimate of the teacher's value added to student achievement.

The precision of the teacher's value-added estimate is related to the average differences between the actual and predicted performance of the students in a teacher's class. Precision depends both on the number of students matched to the teacher and the ability of the model to predict the achievement of the specific students in the teacher's class. All else being equal, the estimates of teachers matched to fewer students are more likely to be imprecise because one or two students with unusually high or low achievement growth can have a larger influence on the average differences between the actual and predicted performance of the students in the teacher's class. Differences between students' actual and predicted performance are related to the existence of factors that influence achievement but are not observed in the data. Some students may have achievement that is hard to predict in the sense that their background characteristics are not very informative about their future achievement. The role of this potential heteroscedasticity—variation across student groups in the difference between their actual and predicted achievement—has received relatively little attention in the value-added literature.[1] However, Stacy et al. (2012) found that students with low socioeconomic status and low prior achievement had larger differences between actual and predicted achievement than more-advantaged students. They showed that the value-added estimates of teachers who had many disadvantaged students were less stable over time.

It is common in the implementation of teacher accountability systems—including in the District of Columbia, New York City, and Los Angeles—to use a procedure known as empirical Bayes (EB) shrinkage to adjust the teacher value-added estimates by their level of precision.[2] The adjusted estimate is a weighted average of the teacher's initial value-added estimate and the value-added of an average teacher, with more precise estimates receiving greater weight. This procedure is called shrinkage because less precise estimates receive lower weight in the adjustment and are shrunk toward the average. Because value-added estimates based on fewer students and students with hard-to-predict achievement will be less precise, the procedure could have differential impacts on the probability that the teachers of fewer students or students with hard-to-predict achievement will be assigned consequences. For example, in the absence of shrinkage, teachers with very few students could be more likely to have estimates in the extremes of the distribution.[3] Similarly, if teachers of disadvantaged students have imprecise initial value-added estimates, shrinkage could reduce the probability that these teachers' estimates are in the extremes of the distribution.

How shrinkage affects the probability that teachers of particular types of students will receive consequences depends on the designs of accountability systems. Many evaluation systems use thresholds to identify highly effective or ineffective teachers and assign consequences. Because consequences are often discrete—for example, whether to retain the teacher or whether to require the teacher to receive professional development—very small changes in a teacher's estimate near the threshold can have substantial consequences. In designing a high-stakes accountability system, one goal is to avoid classification errors.[4]

In these systems, shrinkage is conceptually appealing because it produces estimates of teacher effectiveness that are conservative in the assignment of consequences to teachers. Shrinkage estimates are conservative because an imprecise estimate will be shrunk more toward the overall average, and the teacher might be less likely to be assigned consequences as a result. Because precision depends on the number of students and the characteristics of the students in a teacher's class, shrinkage could reduce the probability that teachers with few students, or with many hard-to-predict students, receive consequences. This might be desirable in evaluation systems because differences in teachers' probabilities of being misclassified could be considered unfair and have deleterious effects on teachers' incentives to teach classes that include certain groups of students.

This article investigates how shrinkage affects the value-added estimates of teachers of hard-to-predict students, focusing on the context of threshold-based accountability systems. We examine three main questions:

1. What student characteristics are correlated with having hard-to-predict test scores?

2. Are these student characteristics correlated with how much teacher value-added estimates change when shrinkage methods are used?

3. Are these student characteristics correlated with how much shrinkage affects the probability that these teachers' estimates exceed policy-relevant thresholds?

Consistent with heteroscedasticity, we find that the achievement of particular students, such as those with low prior achievement and those who receive free lunch, is harder to predict. Teachers of these types of students tend to have less precise value-added estimates, and shrinkage increases their estimates' precision. Shrinkage also reduces the absolute value of the value-added estimates of teachers of hard-to-predict students. However, in our data, shrinkage has no statistically significant effect on the relative probability that teachers of hard-to-predict students receive consequences.

The main contribution of this article is to examine heteroscedasticity and shrinkage in the context of accountability systems based on thresholds. This differs from other papers on shrinkage, which examine how shrinkage affects teacher ranks (Tate 2004; Guarino et al. 2015a), and other work on heteroscedasticity, which examines its effect on the precision and inter-year stability of value-added estimates (Stacy et al. 2012). Our purpose is to examine how shrinkage affects the probability that teachers of hard-to-predict students are classified at the extremes of the value-added distribution of teacher effectiveness, because many evaluation systems use these thresholds to determine consequences.

2. Theory

To illustrate how heteroscedasticity affects estimates of teacher value added, we present the following VAM:
$$Y_{igt} = \lambda Y_{i,g-1} + \alpha' X_{igt} + \theta_t + \varepsilon_{igt}, \qquad (1)$$
where $Y_{igt}$ is the test score of student $i$ in grade $g$ who has teacher $t$ and $Y_{i,g-1}$ is the prior test score for student $i$ in the previous grade, $g-1$. The vector $X_{igt}$ denotes control variables for student demographic characteristics, $\theta_t$ is a vector of teacher effects,[5] and $\varepsilon_{igt}$ is the student-level error term.

Under homoscedasticity, all student-level errors $\varepsilon_{igt}$ have the same variance—the accuracy of a predicted score does not depend on the characteristics of the student. However, estimates of precision typically allow for the possibility of heteroscedastic errors. One possible cause of heteroscedasticity is measurement error in test scores. Typically, scores near the center of the distribution of a standardized test are more reliable than scores in the tails of the distribution (Resch and Isenberg 2014).[6] Consequently, students who are likely to be at the top or the bottom of the achievement distribution might tend to have large differences between their actual and predicted scores. Another reason for heteroscedasticity is the presence of unobserved factors that affect achievement. For example, we do not observe parental involvement or students' motivation, which could affect achievement and vary across different types of students.

Shrinkage places less weight on imprecise initial value-added estimates when constructing the final value-added estimates. If teachers who are assigned students with large residuals tend to have imprecise initial value-added estimates, then the shrinkage procedure will shrink these teachers' estimates proportionately more, and their final estimates will be proportionately more precise than they would be without shrinkage. This relationship means that shrinkage reduces, but does not eliminate, differences in the variances of value-added estimates.

To see this, consider the EB shrinkage procedure outlined in Morris (1983). A teacher's shrinkage estimate is the weighted average of the teacher's preshrinkage value-added estimate and the estimate for the average teacher. Let $\hat{\theta}_t$ be the mean-centered preshrinkage point estimate for teacher $t$ from the value-added regression model, and let the associated heteroscedasticity-robust variance estimate be $\widehat{\mathrm{var}}[\hat{\theta}_t] = \hat{\sigma}_t^2$. The EB estimate for a teacher is approximately equal to a precision-weighted average of $\hat{\theta}_t$ and the overall mean of all estimated teacher effects.[7] Because the overall mean of the estimated teacher effects is zero by design, the teacher's EB estimate $\hat{\theta}_t^{EB}$ can be written as follows:
$$\hat{\theta}_t^{EB} \approx \frac{\hat{\sigma}^2}{\hat{\sigma}^2 + \hat{\sigma}_t^2}\,\hat{\theta}_t, \qquad (2)$$
where $\hat{\sigma}$ is an estimate of the standard deviation of teacher effects (purged of sampling error), which is constant for all teachers. The term $\hat{\sigma}^2/(\hat{\sigma}^2 + \hat{\sigma}_t^2)$ must be less than 1, so the EB estimate always has a smaller absolute value than the initial estimate.
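To make the mechanics of Equation (2) concrete, the following sketch applies the shrinkage formula to a pair of hypothetical teachers; the point estimates, variances, and the value of $\hat{\sigma}^2$ are illustrative assumptions, not values from our data.

```python
import numpy as np

def eb_shrink(theta_hat, var_hat, sigma2):
    """Empirical Bayes shrinkage, Equation (2).

    theta_hat : mean-centered preshrinkage estimates, one per teacher
    var_hat   : robust variance of each preshrinkage estimate
    sigma2    : estimated variance of true teacher effects (constant)
    """
    w = sigma2 / (sigma2 + var_hat)  # weight w_t, always < 1
    return w * theta_hat

# Two teachers with the same point estimate but different precision:
# the imprecise estimate is shrunk much further toward zero.
theta = np.array([0.30, 0.30])
var = np.array([0.005, 0.050])
print(eb_shrink(theta, var, sigma2=0.02))  # [0.24, ~0.086]
```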

Although heteroscedasticity in student background characteristics does not affect the preshrinkage point estimates, it does affect the variance of the teacher estimates and, thus, the EB estimates. The change in the EB estimate in response to a change in the variance of the preshrinkage estimate is given by the following:
$$\frac{\partial \hat{\theta}_t^{EB}}{\partial \hat{\sigma}_t^2} = -\frac{\hat{w}_t^2}{\hat{\sigma}^2}\,\hat{\theta}_t, \qquad (3)$$

where $\hat{w}_t \equiv \hat{\sigma}^2/(\hat{\sigma}^2 + \hat{\sigma}_t^2)$ is the weight on the preshrinkage estimate. The negative sign indicates that estimates of teachers who teach students with characteristics that produce large residuals are shrunk more than those of teachers who teach students with smaller residuals. The extra shrinkage is largest for teachers with larger unshrunken estimates and for teachers with more of the types of students who tend to produce large residuals. The extra shrinkage will tend to distort the relationship between teacher effectiveness and teachers' average student characteristics related to heteroscedasticity (or other characteristics of teachers or classrooms related to heteroscedasticity). This is an example of the trade-off implicit in shrinkage between introducing bias and minimizing the mean squared error (MSE) of the estimates;[8] the negative relationship in Equation (3) does not indicate that these teachers are more or less effective on average—it is instead an artifact of being conservative in handing out extreme value-added scores to teachers.

One benefit of the trade-off between bias and MSE is the increased efficiency of the EB estimator. This can be seen in the lower variances associated with the EB estimates. Ignoring the sampling error associated with $\hat{\sigma}^2$ and $\hat{\sigma}_t^2$, the variance of a teacher's EB estimate is approximately equal to[9]
$$\widehat{\mathrm{var}}[\hat{\theta}_t^{EB}] \approx \hat{w}_t \hat{\sigma}_t^2. \qquad (4)$$

Thus, the EB estimate has a smaller variance than the preshrinkage estimate, because $\hat{w}_t$ is less than 1.

Heteroscedasticity affects $\widehat{\mathrm{var}}[\hat{\theta}_t^{EB}]$ through the variance of the preshrinkage estimates. The change in the variance of a teacher's EB estimate in response to a change in the variance of the preshrinkage estimate is proportional to the square of the weight on the point estimate in the EB calculation:
$$\frac{\partial\, \widehat{\mathrm{var}}[\hat{\theta}_t^{EB}]}{\partial \hat{\sigma}_t^2} = \hat{w}_t^2. \qquad (5)$$

Because $\hat{w}_t$ is less than 1 but greater than 0, shrinkage reduces but does not eliminate heteroscedasticity from the EB estimates. This suggests that shrinkage might mitigate but will not eliminate the differential probability that teachers of hard-to-predict students are classified in the extremes of the value-added distribution.
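A quick numerical check, using the same hypothetical values as above, illustrates why shrinkage compresses but does not equalize the variances: the postshrinkage variances from Equation (4) remain unequal whenever the preshrinkage variances are.

```python
import numpy as np

sigma2 = 0.02                        # variance of true teacher effects (assumed)
var_pre = np.array([0.005, 0.050])   # precise vs. imprecise preshrinkage variances
w = sigma2 / (sigma2 + var_pre)      # weights from Equation (2)
var_post = w * var_pre               # Equation (4)

print(var_pre[1] / var_pre[0])       # 10.0: variance ratio before shrinkage
print(var_post[1] / var_post[0])     # ~3.6: reduced, but still well above 1
```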

3. Value-Added Model and Data

We estimate a VAM that corresponds closely (though not exactly) with the model estimated for the District of Columbia Public Schools (DCPS) for teacher evaluations in the 2011–2012 school year (described in Isenberg and Hock 2012). This model has many features that are common in VAMs estimated in other states or districts. Six of the nine teacher-level VAMs for which we could obtain documentation use teacher fixed effects and apply empirical Bayes shrinkage to the fixed-effect estimates in a post-hoc step, the method we use for our analysis. The other three VAMs bring about shrinkage as part of the regression model by using teacher random effects.[10] The popularity of the two-step approach may be due in part to the desire to avoid the assumption of a random effects model that teacher effectiveness is uncorrelated with student characteristics. Using simulated data, Guarino, Reckase, and Wooldridge (2015b) suggest that teacher fixed effects models are more robust to circumstances in which there is systematic sorting of students to teachers—that is, to the type of sorting that would violate this assumption of a random effects model.

We estimate the VAM separately for teachers of elementary (grades 4 and 5) and middle school (grades 6 through 8) students:
$$Y_{tig} = \lambda_g Y_{i(g-1)} + \omega_g Z_{i(g-1)} + \alpha' X_i + \delta_g + \eta' T_{ti} + \varepsilon_{tig}, \qquad (6)$$
where $Y_{tig}$ is the posttest score for student $i$ in grade $g$ linked to teacher $t$ and $Y_{i(g-1)}$ is the same-subject pretest for student $i$ in grade $g-1$ during the previous year. The variable $Z_{i(g-1)}$ denotes the pretest in the opposite subject. Thus, when estimating teacher effectiveness in math, $Y_{tig}$ and $Y_{i(g-1)}$ represent math tests, with $Z_{i(g-1)}$ representing reading tests, and vice versa. The pretest scores capture prior inputs into student achievement, and the associated coefficients $\lambda_g$ and $\omega_g$ vary by grade. The vector $X_i$ denotes control variables for individual student background characteristics—specifically, indicators for free lunch eligibility, reduced-price lunch eligibility, limited English proficiency, and special education status. The coefficients on these characteristics are constrained to be the same across grades within each grade span. The vector $\delta_g$ includes grade indicators.

The vector $T_{ti}$ contains one indicator variable for each teacher. A student contributes one observation to the model for each teacher to whom the student is linked. This contribution is based on a roster confirmation process that enables teachers to indicate whether and for how long they have taught the students on their administrative rosters and to add any students who are not listed on their administrative rosters (Hock and Isenberg 2012). Students are weighted in the regression according to their dosage, which indicates the amount of time the teacher taught the student.[11] The vector $\eta$ includes one coefficient for each teacher. Finally, $\varepsilon_{tig}$ is the random error term.

Measurement error in the pretest scores will attenuate the estimated relationship between the pre- and posttest scores. To avoid this bias, the VAM in Equation (6) is estimated in two regression steps to account for both measurement error in the pretests and clustering of standard errors at the student level, which cannot be implemented simultaneously. In the first step, we estimate Equation (6) adjusting for measurement error in the pretests using an errors-in-variables correction (eivreg in Stata) that relies on published information on the test-retest reliability of the District of Columbia Comprehensive Assessment System (DC CAS) (Buonaccorsi 2010). We then use the measurement-error-corrected values of the pretest coefficients to calculate adjusted posttest scores by subtracting the predicted effects of the pretest scores from the posttest scores:
$$\hat{G}_{tig} = Y_{tig} - \hat{\lambda}_g Y_{i(g-1)} - \hat{\omega}_g Z_{i(g-1)}. \qquad (7)$$

Notably, because the measurement error correction accounts only for measurement error that is constant across the test score distribution, any measurement error that varies across the distribution will contribute to heteroscedasticity.
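For readers who want to see the moment correction underlying an errors-in-variables regression of this kind, the sketch below gives a simplified version in Python: given a known reliability for each error-prone regressor, it subtracts the implied measurement-error variance from the cross-product matrix before solving. This is a stylized illustration of the idea, not the exact eivreg routine we use; the handling of the constant and of demeaning is simplified.

```python
import numpy as np

def eiv_regression(X, y, reliability):
    """Errors-in-variables estimator with known reliabilities (a sketch).

    X           : n x k regressor matrix (include a constant column)
    y           : outcome vector
    reliability : length-k array; use 1.0 for error-free columns
    """
    n = X.shape[0]
    # Reliability r implies Var(measurement error) = (1 - r) * Var(observed).
    err_var = (1.0 - np.asarray(reliability)) * X.var(axis=0, ddof=1)
    # Subtract the error variance from the diagonal of X'X before solving.
    S = X.T @ X - n * np.diag(err_var)
    return np.linalg.solve(S, X.T @ y)
```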

In the second step, we estimate teacher effects by regressing the adjusted posttest scores from the first step on student background characteristics, grade indicators, and teacher indicators, clustering standard errors by student using a cluster-robust sandwich variance estimator (Liang and Zeger 1986; Arellano 1987):
$$\hat{G}_{tig} = \alpha' X_i + \delta_g + \eta' T_{ti} + \varepsilon_{tig}. \qquad (8)$$

The initial teacher value-added estimates are the coefficients on the teacher indicators in this regression, and their variances are given by the squared standard errors of the coefficient estimates. We then remove any teachers with fewer than 15 students and re-center the value-added estimates to have a mean of 0. Finally, we calculate the EB estimates by applying Equation (2) to the initial estimates and variances from the main value-added regression. We also calculate the variance of the EB estimates using Equation (4).
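The sketch below strings this second step together in Python for concreteness: a dosage-weighted regression of the adjusted gains on student covariates and grade and teacher indicators with student-clustered standard errors, followed by re-centering and shrinkage. All column names are hypothetical, $\hat{\sigma}^2$ is taken as given rather than estimated, and using an omitted reference teacher in place of the paper's exact normalization is a simplification.

```python
import pandas as pd
import statsmodels.api as sm

def second_step(df, sigma2=0.02):
    """Equation (8) plus EB shrinkage (Equations (2) and (4)), a sketch.

    df is student-teacher level with hypothetical columns: 'g_adj'
    (adjusted posttest from Equation (7)), 'free_lunch', 'grade',
    'teacher', 'student', and 'dosage'. Teachers with fewer than 15
    students are assumed to have been dropped already.
    """
    X = pd.get_dummies(df[['free_lunch', 'grade', 'teacher']],
                       columns=['grade', 'teacher'], drop_first=True)
    X = sm.add_constant(X.astype(float))
    fit = sm.WLS(df['g_adj'], X, weights=df['dosage']).fit(
        cov_type='cluster', cov_kwds={'groups': df['student']})
    cols = [c for c in X.columns if c.startswith('teacher_')]
    theta = fit.params[cols] - fit.params[cols].mean()  # re-center to mean 0
    var = fit.bse[cols] ** 2                            # squared standard errors
    w = sigma2 / (sigma2 + var)                         # weights from Equation (2)
    return w * theta, w * var  # EB estimates and their variances
```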

We use data from DCPS and the Office of the State Superintendent of Education of the District of Columbia (OSSE). The data contain information on students' test scores and demographic characteristics and enable students to be linked to their teachers. In DCPS, students in grades 3 through 8 and 10 were tested in math and reading using the DC CAS tests. Our analysis is based on students who were in grades 4 through 8 during the 2011–2012 school year and who had both pre- and posttest scores. To enable us to compare students across grades, we standardize student test scores within subject, year, and grade to have a mean of 0 and a standard deviation of 1. We exclude students who are repeating the grade so that in each grade, we compare only students who completed the same examinations.
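As a small illustration of this standardization step, the snippet below centers and scales scores within subject, year, and grade cells (the column names are hypothetical):

```python
import pandas as pd

def standardize_scores(df):
    """Standardize scores to mean 0, SD 1 within subject-year-grade cells."""
    grp = df.groupby(['subject', 'year', 'grade'])['score']
    return (df['score'] - grp.transform('mean')) / grp.transform('std')
```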

Teachers vary in the characteristics of the students that they teach. Table 1 displays the summary statistics for the 297 elementary school teachers (top panel) and 182 middle school teachers (bottom panel) in our sample. For simplicity, we restrict our analysis to math teachers and exclude teachers linked to fewer than 15 or more than 150 students. Excluding teachers who are matched to fewer than 15 students follows the district's implementation of its teacher accountability system.

Table 1. Teacher-level summary statistics.

Because groups of students with large residuals will affect the precision of teacher value-added estimates only if they are unevenly distributed across teachers, the table reports the teacher-level averages of student characteristics. For homeroom teachers—typically elementary school teachers—these values are the average characteristics of the students in the teachers' classrooms, whereas for departmentalized teachers, these values are averaged across multiple classes. Table 1 shows that there is wide variation in students' test scores and classroom composition across teachers. For example, in math, the average pretest scores of teachers' students range from −1.34 to 1.36 standard deviations in elementary school and from −1.12 to 1.16 standard deviations in middle school. Teachers also vary substantially in the percentages of their students who are eligible for free or reduced-price lunch, have limited English proficiency, or receive special education services.

Table 1 also reports the number of student-equivalents taught by teachers, which weights the number of students taught by teachers by their dosages—that is, the portions of time that individual students were enrolled in their teachers' classes. Elementary school teachers are typically linked to fewer students than middle school teachers, consistent with more departmentalized teaching in middle school. However, even within a grade span there is considerable variation in student-equivalents across teachers, which could affect the precision of value-added estimates.

Postshrinkage value-added estimates and variances are smaller than their preshrinkage counterparts. Table 1 presents the preshrinkage and postshrinkage value-added estimates and their estimated variances. For elementary school teachers, the preshrinkage estimates range from −0.90 to 0.96 and the estimated preshrinkage variances range from 0 to 0.07. Because the shrinkage procedure assigns the preshrinkage estimates and variances weights that are less than 1—the average weight for elementary teachers is 0.85—the shrinkage estimates have a smaller range than the preshrinkage estimates and the shrinkage variances are smaller than the preshrinkage variances. For elementary school teachers, the postshrinkage estimates range from −0.77 to 0.71 and the estimated postshrinkage variances range from less than 0.005 to 0.04.

In the next section, we describe our methodology to examine how differences in the characteristics of teachers' students are related to precision and how accounting for imprecision using shrinkage affects the probability that teachers are classified in the extremes of the value-added distribution based on their students' characteristics.

4. Empirical Approach

To describe heteroscedasticity in the student-level errors, we estimate the relationship between the squared residuals from the VAM—a measure of the accuracy of the prediction—and student characteristics using the following student-teacher-level regression:
$$\hat{\varepsilon}_{tig}^2 = \alpha + \beta' X_i + \upsilon_{tig}. \qquad (9)$$

The outcome $\hat{\varepsilon}_{tig}^2$ is the squared residual for student $i$ in grade $g$ with teacher $t$ from the student-teacher-level regression of student test scores on student characteristics and grade and teacher indicators. Because the VAM contains teacher indicators, these residuals are the remaining variation in student outcomes after taking into account the average differences in the effectiveness of students' teachers. Thus, unpredictable outcomes for certain types of students do not simply reflect assignment of these students to teachers with a wider distribution of ability levels. Instead, the residuals measure the remaining difference between these students' outcomes and those of other students taught by the teacher. The vector $X_i$ denotes student characteristics (that is, average math and reading pretest scores, eligibility for free or reduced-price lunches, limited English proficiency, and special education status), and $\upsilon_{tig}$ is an error term.

The regression in Equation (9) is intended to provide a parsimonious description of heteroscedasticity and not a formal test for its presence. For ease of interpretation, we estimate these regressions separately for each student characteristic and grade span; the sole exception is eligibility for free or reduced-price lunch, which are entered into the regression simultaneously.[12] To address repeated observations of students, we estimate standard errors that account for clustering in the residual at the student level and weight the regressions by the student-teacher dosages. As a sensitivity analysis, we also estimate Equation (9) using the natural log of the squared residual as the dependent variable.
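A sketch of one such regression, with dosage weights and student-clustered standard errors, is below; the column names are hypothetical and the residuals are assumed to come from the VAM above.

```python
import statsmodels.api as sm

def residual_regression(df, characteristic):
    """Equation (9) for a single student characteristic, a sketch.

    df is student-teacher level with hypothetical columns: 'resid'
    (VAM residual), the characteristic of interest, 'dosage', 'student'.
    """
    X = sm.add_constant(df[[characteristic]].astype(float))
    return sm.WLS(df['resid'] ** 2, X, weights=df['dosage']).fit(
        cov_type='cluster', cov_kwds={'groups': df['student']})

# e.g., residual_regression(df, 'free_lunch').summary()
```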

After examining which types of students tend to have larger residuals, we analyze the relationship between these student characteristics and the precision and magnitude of their teachers' pre- and postshrinkage value-added estimates. We estimate the following teacher-level bivariate regression model, again using separate regressions for each student characteristic and grade span:
$$W_t = \alpha + \beta' X_t + \varepsilon_t, \qquad (10)$$

where $W_t$ represents a measure of the precision or magnitude of the pre- or postshrinkage value-added estimate for teacher $t$. Specifically, for both pre- and postshrinkage estimates, we examine as outcomes the natural log of the estimates' variance, the absolute value of the estimates, and indicators for the estimates being above or below a given percentile of the value-added distribution. We examine the absolute value rather than the original value of the estimate because value-added estimates have mean zero (for the average teacher), so the further from zero, the more extreme the estimate of teacher effectiveness in a good (positive) or bad (negative) direction. When $W_t$ represents an indicator for being above or below a cut-point, we present results based on a linear probability model, but as a sensitivity analysis we also estimate a logit model. The vector $X_t$ represents the average characteristics of the teacher's students from Table 1, including measures of the student-equivalents taught by the teacher—the natural log of the total or the raw total. As we did in the student-teacher-level regressions, for ease of interpretation, we estimate separate regressions for each student characteristic (with the exception of eligibility for free or reduced-price lunch) and grade span. We calculate heteroscedasticity-robust standard errors.
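A minimal sketch of Equation (10) with a heteroscedasticity-robust fit follows; `teachers` is a hypothetical one-row-per-teacher frame containing the outcome and the classroom averages.

```python
import statsmodels.api as sm

def teacher_regression(teachers, outcome, characteristic):
    """Equation (10): teacher-level regression with robust standard errors."""
    X = sm.add_constant(teachers[[characteristic]].astype(float))
    return sm.OLS(teachers[outcome], X).fit(cov_type='HC1')

# e.g., log preshrinkage variance on students' mean math pretest score:
# teacher_regression(teachers, 'log_var_pre', 'mean_math_pretest').summary()
```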

5. Results

Table 2 presents the results from the regressions that examine the relationship between student characteristics and the various outcomes. The first column displays the results from the student-teacher-level regressions of the squared residuals (Equation (9)) on student characteristics and provides evidence that some groups of students have significantly larger residuals than others. This is consistent with heteroscedasticity and shows which types of students have achievement that is hard to predict. In elementary school, the hard-to-predict students are those who are eligible for free lunch, receive special education services, and have lower pretest scores.[13]

Table 2. Effects of student characteristics on teacher value-added estimates.

The same characteristics are also significantly associated with larger residuals in middle school, as is limited English proficiency. When we substituted the natural log of the squared residual as the dependent variable, we obtained very similar results: for each variable, the direction (positive or negative) was identical to the results shown in Table 2, and the same variables were statistically significant (not shown). These results are consistent with those of Stacy et al. (2012), who found larger residuals for students eligible for free lunch and students with lower prior test scores but did not explicitly examine special education or limited English proficiency status.

Consistent with the significant relationships between particular student characteristics and the student-level residuals, elementary school teachers assigned hard-to-predict students also have less precise value-added estimates. In column 2 of Table 2, we provide the results from teacher-level regressions, which separately regress the natural log of teachers' preshrinkage variances on each of the characteristics of teachers' students and the natural log of the teachers' number of student-equivalents (Equation (10)).[14] For elementary school teachers, a one-standard-deviation decrease in students' average pretest scores in math is significantly associated with a 20% increase in the teacher's preshrinkage variance. Lower pretest scores in reading and free lunch eligibility are also significantly associated with higher preshrinkage variances in elementary school. Interestingly, limited English proficiency is significantly correlated with lower preshrinkage variances in elementary school, and there is no significant relationship between the preshrinkage variances and special education status. Having a larger number of students is associated with lower preshrinkage variances.[15]

The same general patterns hold in middle school. Lower pretest scores in math and reading, free lunch eligibility, and special education status are significantly associated with higher preshrinkage variances. The magnitudes of the associations are generally larger for middle school teachers than for elementary school teachers. For example, a one-standard-deviation decrease in students' math scores is significantly associated with a 64% increase in a teacher's preshrinkage variance in the middle school grades. Column 2 of Table 2 also reports the relationship between the natural log of the teachers' number of student-equivalents and the variance.

These same relationships with student characteristics are significantly smaller for postshrinkage variances. Recall that shrinkage increases the precision of estimates at the cost of intentionally introducing bias. This effect of shrinkage is reflected in columns 3 and 4 of Table 2. In column 3, we examine the relationships among teachers' postshrinkage variances, the characteristics of their students, and teachers' number of student-equivalents. Compared with the preshrinkage variances, the postshrinkage variances have the same number of statistically significant relationships with student characteristics, although the sizes of these correlations are significantly smaller for the postshrinkage variances. For example, for elementary school students, a one-standard-deviation decrease in students' average pretest scores in math is significantly associated with a 17% increase in the teacher's postshrinkage variance, compared with a 20% increase in the preshrinkage variance, a statistically significant difference. Statistical tests of the differences between the coefficients for the preshrinkage and postshrinkage models (columns 2 and 3, respectively) appear in column 4. Similarly, shrinkage also reduces the size of the relationships of free lunch eligibility and teachers' number of student-equivalents with the estimates' variance.

Shrinkage also affects the absolute value of the estimates of teachers with hard-to-predict students. Column 5 of Table 2 presents results from teacher-level regressions of the absolute value of teachers' preshrinkage value-added estimates on student characteristics and teachers' number of student-equivalents. In elementary school, the preshrinkage value-added estimates of teachers whose students have lower pretest scores or are more likely to be eligible for free lunch are significantly further from the overall teacher mean than those of other teachers, whereas the estimates of teachers with more limited English proficient students are significantly closer to the mean.[16] In middle school, student characteristics do not have a significant relationship with the absolute value of the preshrinkage value-added estimates, possibly because we lack the power to precisely estimate these relationships. In column 6, we examine the relationship between the absolute value of teachers' postshrinkage value-added estimates, student characteristics, and teachers' number of student-equivalents. As before, student characteristics and teachers' number of student-equivalents still have statistically significant relationships with the absolute values of the postshrinkage estimates, but these relationships are weaker than the corresponding relationships for the preshrinkage estimates.

We next assess how shrinkage affects the probability that teachers are assigned positive or negative consequences. We focus on the extremes of the distribution because teachers with such estimates are the most likely to be targeted for rewards or sanctions.[17] In particular, we examine the relationship between student characteristics and indicator variables for whether the estimates fell into the top or bottom deciles of their distributions.[18] We presume that districts “grade on a curve”—that is, they set thresholds for categories of effectiveness based on teachers' positions in the value-added distribution.[19] By using the same relative thresholds of effectiveness for pre- and postshrinkage estimates, we use different absolute thresholds as measured in standard deviations of teacher value added. Examining value-added estimates in a relative rather than absolute sense also affects how to think about how estimates change after shrinkage. In an absolute sense, all estimates move toward the center of the distribution, with less precise estimates moving more. In relative terms, however, relatively imprecise initial estimates will move toward the center, and relatively precise estimates will move toward the extremes only if they change places in a rank order with imprecise initial estimates.
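Because the thresholds are redrawn within each distribution, classifying a teacher as extreme is a purely relative operation, as in the sketch below; counting teachers whose flags differ before and after shrinkage reproduces the kind of tallies reported next (the array names are hypothetical).

```python
import numpy as np

def extreme_flags(estimates, pct=10):
    """Flag estimates in the top or bottom pct percent of their own
    distribution, i.e., with thresholds 'graded on a curve'."""
    lo, hi = np.percentile(estimates, [pct, 100 - pct])
    return (estimates <= lo) | (estimates >= hi)

# Teachers who move into or out of the extremes after shrinkage:
# moved = np.sum(extreme_flags(theta_pre) != extreme_flags(theta_post))
```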

We present these results in Table 3. In column 1, using a linear probability model, we regress an indicator variable for the preshrinkage estimate falling in either the top or bottom decile of the preshrinkage value-added distribution on each student characteristic and on teachers' number of student-equivalents (Equation (10)). For elementary school, we find that having students with low pretest scores, students eligible for free lunch, or smaller numbers of students increases the probability that teachers' preshrinkage estimates are in either the top or bottom decile of the distribution. In middle school, there is no relationship between the characteristics of teachers' students and the probability that the estimates fall in the top or bottom decile. Column 2 examines the probability that the postshrinkage estimate is in either the top or bottom decile of the postshrinkage value-added distribution, and column 3 tests whether the difference between the pre- and postshrinkage relationships is statistically significant. Of the 57 elementary school teachers in the top or bottom decile before applying shrinkage, 2 moved into or out of the extremes of the distribution after applying shrinkage; for middle school teachers, 5 of 32 did so. We find that shrinkage has little effect on the probability that teachers of particular student groups are classified in the top or bottom deciles of the value-added distribution.

Table 3. Effects of student characteristics on distribution of teacher value-added estimates.

Columns 4 through 6 repeat the same analysis using an indicator variable for the estimate falling in either the top or bottom quintile of the distribution. Because the top and bottom quintiles cover a much larger range of teacher effectiveness, there are fewer significant relationships between student characteristics and the probability of the estimate's being classified in either the top or bottom quintile, though the magnitudes of some of the relationships appear to have increased. Comparing movement of teachers into and out of the top and bottom quintiles, 7 of 119 elementary school teachers switched places, as did 3 of 73 middle school teachers. However, shrinkage does not significantly affect the probability that teachers of hard-to-predict students are classified in either the top or bottom quintile of the value-added distribution.[20]

6. Conclusion

Recent work has raised concerns about the effect of heteroscedasticity on estimates of value added because heteroscedasticity can reduce the precision of value-added estimates for teachers of particular types of students (e.g., Stacy et al. 2012). Because shrinkage adjusts value-added estimates by their level of precision, it might be viewed as a method to mitigate heteroscedasticity. This article empirically investigates how shrinkage affects the relationship between student characteristics and the precision of value-added estimates, the absolute value of those estimates, and the probability that the estimates fall in the extremes of the value-added distribution. We find three main results. First, consistent with heteroscedasticity, we find that particular student characteristics associated with disadvantage—lower pretest scores and eligibility for free lunch—are correlated with the precision of a teacher's estimate. Second, we show that shrinking the estimates improves the precision of the adjusted estimates and reduces the absolute value of estimates for teachers with hard-to-predict students. Third, in these data, shrinking has no statistically significant impact on the relative probability of exceeding a threshold for teachers of hard-to-predict students.

The first two results are predicted by theory, while the third is puzzling. One piece of this puzzle may be that we assume the accountability thresholds are rescaled along with the estimates: while shrinkage moves all teachers toward the middle of the distribution, the thresholds also adjust inward as shrinkage reduces the variance of teacher estimates. One must use caution in generalizing the result that the accountability thresholds move inward as quickly as the estimates for teachers of disadvantaged students are pulled toward the middle. In this investigation, we have focused on data from a single year and district. An analysis using more years and districts would reveal whether this result is an anomaly or part of a broader pattern. In the meantime, given the conceptual appeal of shrinkage, the relative simplicity of implementing it as a postestimation step, and the importance of this procedure in protecting teachers with relatively few students from being placed in the extremes of the distribution by chance, we recommend its use as part of any value-added model used for teacher accountability.

Acknowledgments

We thank the Office of the State Superintendent of Education of the District of Columbia and the District of Columbia Public Schools for providing the data for this study. We are grateful to Duncan Chaplin, Alexandra Resch, Cassandra Guarino, and two anonymous referees for their helpful comments. Emma Kopa, assisted by Juha Sohlberg, Raúl Torres, and Maureen Higgins, provided excellent programming support.

Notes

1 A number of studies have examined the extent to which nonrandom assignment of students to teachers causes bias in teacher value-added estimates and teachers to be misclassified (e.g., Aaronson, Barrow, and Sander 2007; Kane and Staiger 2008; Rothstein 2010; Kane et al. 2013; Chetty, Friedman, and Rockoff 2014; Guarino, Reckase, and Wooldridge 2015b). While these studies tend to focus on the point estimates, this article examines the estimates' variances.

2 The shrinkage estimates are called the best linear unbiased predictors (BLUPs) because they minimize the expected mean squared error (MSE) between teachers' estimates and their actual contributions. However, critics of the approach note that it obtains these properties by intentionally introducing statistical bias in the estimates and does not generally provide an accurate ranking of teachers (Tate 2004).

3 Kane and Staiger (2002) found that small schools were more likely to have large changes in achievement across years than large schools.

4 Shrinkage could also differentially reduce the probability of consequences for teachers with greater variation in their true impacts on students. Variation in true impacts cannot be disentangled from variation due to random shocks to student performance, and the shrinkage procedure errs on the side of attributing imprecision to random variation rather than to variation in true impacts.

5 Teacher effects can be estimated as random variables that are independent of other factors affecting test scores or as fixed effects that might be correlated with those factors. We use the fixed-effects approach to account for the possibility that teacher assignments are correlated with the covariates included in the model.

6 Standardized tests are designed to differentiate students who are proficient from those who are not, so they provide more reliable scores for students around the level of proficiency. Koedel, Leatherman, and Parsons (2012) developed a method for accounting for measurement error that varies across the achievement distribution.

7 In Morris (1983), the EB estimate does not exactly equal the precision-weighted average of the two values due to a correction for bias. This adjustment increases the weight on the overall mean by (K – 3)/(K – 1), where K is the number of teachers. For ease of exposition, we have omitted this correction from the description given here and consider this expression to hold with equality.

8 Morris (1983) and others refer to EB estimates as unbiased. For example, the EB random effects estimator is often called the BLUP. The EB estimates are unbiased only in the sense that the expectation of the mean over all groups (teachers) gives an unbiased estimate of the true mean over the population of group effects. As a direct consequence of Equation (3), a particular teacher's EB estimate is not generally an unbiased estimate of that teacher's true effect.

9 The actual formula we employ for the variance of the EB estimates is based on a formula in Morris (1983) that accounts for sampling error in the estimated variances and makes degrees-of-freedom corrections. For ease of exposition, we have omitted these corrections from the description given here and consider this expression to hold with equality.

10 We found nine technical reports on VAMs that are used for teacher evaluation. They describe the VAMs that have been used in Baltimore (American Institutes for Research 2014); Charleston (Resch and Deutsch 2015); DCPS (Isenberg and Walsh 2014); Florida (American Institutes for Research 2013); Hillsborough County, Florida (Value-Added Research Center 2014); Louisiana (Louisiana's Value-Added Assessment Model 2013); New York City (Value-Added Research Center 2010); Oklahoma (Walsh, Liu, and Dotter 2015); and Pittsburgh (Rotz, Johnson, and Gill 2014). All incorporate empirical Bayes shrinkage. Of this group, Baltimore, Florida, and Louisiana use random effects; the rest use fixed effects followed by a post-regression shrinkage step.

11 To estimate the effectiveness of teachers who share students, we use a technique called the full roster plus method, which attributes equal credit to teachers of shared students. In this method, each student contributes one observation to the model for each teacher to whom he or she is linked and students are weighted based on the dosage they contribute (Hock and Isenberg 2012; Isenberg and Walsh 2015). Then we create additional observations to equalize the dosage that each student contributes to the model. The proportion of dosage each student contributes to a teacher remains unchanged because the additional observations are linked to a distinct set of so-called shadow teacher indicators. Coefficient estimates for these shadow teachers are discarded.

12 We do not examine higher-order polynomials for pretest scores.

13 To determine whether students with very high or very low pretest scores have larger residuals, as would be expected from measurement error in the tests, we also examined the relationship between the residuals and the absolute value of the pretest scores, an indicator for having a below-mean pretest score, and an interaction between the absolute value of the pretest score and having a below-mean pretest score (results not shown). Although having a pretest score at either extreme of the distribution is associated with larger residuals, residuals are significantly higher for students at the bottom of the pretest distribution than for those at the top.

14 We use the natural log of the variance as the dependent variable in these specifications for two reasons. First, the natural log of the variance more closely meets the normality assumption for efficient ordinary least squares (OLS) estimates. Second, the specification reflects the theoretical relationship between the variance and observable characteristics: $\ln(\hat{V}_t) = \ln(\hat{\sigma}_t^2) - \ln(N_t) = \ln(\alpha + \beta' X_t + \varepsilon_t) - \ln(N_t)$, where $\hat{V}_t$ is the estimated variance and $N_t$ is the number of student-equivalents.

15 The coefficient on the natural log of the number of student-equivalents is not −1, as would be expected from the equation in the previous footnote. This is likely due to correlations between class size and other student characteristics that are correlated with teachers' variances.

16 The absolute value is measured relative to the overall teacher mean, rather than the mean conditional on the characteristics of teachers' students. The preshrinkage estimates could be related to the characteristics of teachers' students if teachers are not equitably distributed. The results in columns 5 through 7 of Table 2 do not adjust for correlated teacher assignments. However, shrinkage is the only difference between the value-added estimates used in columns 5 and 6, so the difference in the relationships between student characteristics and the absolute values of the value-added estimates (column 7) can be attributed to shrinkage.

17 The probability that teachers in the top or bottom decile receive consequences depends on the weight that value-added estimates receive in the overall evaluation, the correlation between value-added estimates and other evaluation components, and the placement of the thresholds.

18 As a sensitivity analysis, we also replace the top and bottom deciles by the top and bottom quintiles.

19 In a typical teacher evaluation system, like the system that was in place in DCPS, this does not force a certain number of teachers to have positive consequences and others to have negative consequences because value-added estimates are used in conjunction with rigorous teacher observations and other measures to create a final evaluation score for each teacher.

20 As a sensitivity analysis, we estimated Equation (10) using a logit model in place of a linear probability model. The direction and statistical significance of the coefficients reported in columns 1, 2, 4, and 5 of Table 3 are nearly identical (not shown). The only difference is for the coefficient on the number of student-equivalents. These coefficients are just under the threshold for statistical significance in the elementary grade span for the top/bottom decile specification using a linear probability model. By contrast, they are not statistically significant when using a logit model.

References

  • Aaronson, D., Barrow, L., and Sander, W. (2007), “Teachers and Student Achievement in the Chicago Public High Schools,” Journal of Labor Economics, 25, 95–135.
  • American Institutes for Research (2013), “Florida Comprehensive Assessment Test (FCAT) 2.0 Value-Added Model: Technical Report 2012–13,” Washington, DC: American Institutes for Research.
  • American Institutes for Research (2014), “Baltimore City Public Schools Value-Added Model Technical Report,” Washington, DC: American Institutes for Research.
  • Arellano, M. (1987), “Computing Robust Standard Errors for Within-Groups Estimators,” Oxford Bulletin of Economics and Statistics, 49, 431–434.
  • Buonaccorsi, J. P. (2010), Measurement Error: Models, Methods, and Applications, Boca Raton, FL: Chapman & Hall/ CRC.
  • Chetty, R., Friedman, J., and Rockoff, J. (2014), “Measuring the Impacts of Teachers I: Evaluating Bias in Teacher Value-Added Estimates,” American Economic Review, 104, 2593–2632.
  • Guarino, C. M., Maxfield, M., Reckase, M. D., Thompson, P., and Wooldridge, J. M. (2015a), “An Evaluation of Empirical Bayes' Estimation of Value-Added Teacher Performance Measures,” Journal of Educational and Behavioral Statistics, 40, 190–222.
  • Guarino, C. M., Reckase, M. D., and Wooldridge, J. M. (2015b), “Can Value-Added Measures of Teacher Performance be Trusted?” Education Finance and Policy, 10, 117–156.
  • Hock, H., and Isenberg, E. (2012), Methods for Accounting for Co-Teaching in Value-Added Models, Working Paper No. 6, Washington, DC: Mathematica Policy Research.
  • Isenberg, E., and Hock, H. (2012), “Measuring School and Teacher Value Added in DC, 2011–2012 School Year,” Final Report Submitted to the Office of the State Superintendent of Education for the District of Columbia and the District of Columbia Public Schools, Washington, DC: Mathematica Policy Research.
  • Isenberg, E., and Walsh, E. (2014), “Measuring Teacher Value Added in DC, 2013-2014 School Year,” Final Report Submitted to the Office of the State Superintendent of Education for the District of Columbia and the District of Columbia Public Schools, Washington, DC: Mathematica Policy Research.
  • Isenberg, E., and Walsh, E. (2015), “Accounting for Co-Teaching: A Guide for Policymakers and Developers of Value-Added Models,” Journal of Research on Educational Effectiveness, 8, 112–119.
  • Kane, T. J., and Staiger, D. O. (2002), “The Promise and Pitfalls of Using Imprecise School Accountability Measures,” Journal of Economic Perspectives, 16, 91–114.
  • Kane, T. J., and Staiger, D. O. (2008), Estimating Teacher Impacts on Student Achievement: An Experimental Evaluation, Working Paper No. 14607, Cambridge, MA: National Bureau of Economic Research.
  • Kane, T., McCaffrey, D., Miller, T., and Staiger, D. (2013), Have we Identified Effective Teachers? Validating Measures of Effective Teaching Using Random Assignment, Seattle, WA: Bill & Melinda Gates Foundation.
  • Koedel, C., Leatherman, R., and Parsons, E. (2012), Test Measurement Error and Inference from Value-Added Models, unpublished manuscript, Columbia, MO: University of Missouri.
  • Liang, K.-Y., and Zeger, S. L. (1986), “Longitudinal Data Analysis Using Generalized Linear Models,” Biometrika, 73, 13–22.
  • Louisiana's Value-Added Assessment Model as Specified in Act 54 (2013), A Report to the Board of Elementary and Secondary Education, September 2013. Available at http://www.louisianabelieves.com/docs/teaching/2012-2013-value-added-report.pdf?sfvrsn=8. Accessed March 31, 2016.
  • Mihaly, K., McCaffrey, D. F., Staiger, D. O., and Lockwood, J. R. (2013), A Composite Estimator of Effective Teaching, Seattle, WA: Bill & Melinda Gates Foundation.
  • Morris, C. N. (1983), “Parametric Empirical Bayes Inference: Theory and Applications,” Journal of the American Statistical Association, 78, 47–55.
  • Resch, A., and Deutsch, J. (2015), “Measuring School and Teacher Value Added in Charleston County School District, 2014–2015 School Year,” Final Report Submitted to the Charleston County School District, Washington, DC: Mathematica Policy Research.
  • Resch, A., and Isenberg, E. (2014), “How Do Test Scores at the Floor and Ceiling Affect Value-Added Estimates?” Working Paper No. 33, Washington, DC: Mathematica Policy Research.
  • Rothstein, J. (2010), “Teacher Quality in Educational Production: Tracking, Decay, and Student Achievement,” Quarterly Journal of Economics, 125, 175–214.
  • Rotz, D., Johnson, M., and Gill, B. (2014), “Value-Added Models for the Pittsburgh Public Schools, 2012-13 School Year,” Cambridge, MA: Mathematica Policy Research.
  • Stacy, B., Guarino, C., Reckase, M., and Wooldridge, J. (2012), “Does the Precision and Stability of Value-Added Estimates of Teacher Performance Depend on the Types of Students They Serve?” preliminary draft, East Lansing, MI: Education Policy Center at Michigan State University.
  • Tate, R. L. (2004), “A Cautionary Note on Shrinkage Estimates of School and Teacher Effects,” Florida Journal of Educational Research, 42, 1–21.
  • Value-Added Research Center (2010), NYC Teacher Data Initiative: Technical Report on the NYC Value-Added Model 2010, Madison, WI: Value-Added Research Center at Wisconsin Center for Education Research, University of Wisconsin-Madison.
  • ——— (2014), Hillsborough County Public Schools Value-Added Project: Year 3 Technical Report HCPS Value-Added Models 2013, Madison, WI: Value-Added Research Center at Wisconsin Center for Education Research, University of Wisconsin-Madison.
  • Walsh, E., Liu, A. Y., and Dotter, D. (2015), Measuring Teacher and School Value Added in Oklahoma, 2013–2014 School Year, Final Report Submitted to the Office of Educator Effectiveness at the Oklahoma State Department of Education, Washington, DC: Mathematica Policy Research.