
Value-Added Models (VAMs): Caveat Emptor

Pages 1-9 | Received 01 Jan 2015, Accepted 01 Mar 2016, Published online: 26 May 2016

References

  • American Educational Research Association (AERA) (2015), “AERA Statement on Use of Value-Added Models (VAM) for the Evaluation of Educators and Educator Preparation Programs,” available at http://edr.sagepub.com/content/early/2015/11/10/0013189X15618385.full.pdf+html.
  • American Statistical Association (ASA) (2014), “ASA Statement on Using Value-Added Models for Educational Assessment,” available at http://www.amstat.org/policy/pdfs/ASA_VAM_Statement.pdf.
  • Amrein-Beardsley, A. (2008), “Methodological Concerns About the Education Value-Added Assessment System (EVAAS),” Educational Researcher, 37, 65–75.
  • Baker, E., Barton, P., Darling-Hammond, L., Haertel, E., Ladd, H., Linn, R., Ravitch, D., Rothstein, R., Shavelson, R., and Shepard, L. (2010), Problems With the Use of Student Test Scores to Evaluate Teachers, Washington, DC: Economic Policy Institute. Available at http://www.epi.org/publications/entry/bp278
  • Ballou, D., Sanders, W. L., and Wright, P. (2004), “Controlling for Student Background in Value-Added Assessment of Teachers,” Journal of Educational and Behavioral Statistics, 29, 37–65.
  • Berliner, D. C. (2013), “Effects of Inequality and Poverty vs. Teachers and Schooling on America’s Youth,” Teachers College Record, 115. Available at http://www.tcrecord.org/Content.asp?ContentID=16889
  • ——— (2014), “Exogenous Variables and Value-Added Assessments: A Fatal Flaw,” Teachers College Record, 116. Available at http://www.tcrecord.org/Content.asp?ContentId=17293
  • Betebenner, D. W. (2009b), “Norm- and Criterion-Referenced Student Growth,” Educational Measurement: Issues and Practice, 28, 42–51.
  • Braun, H. I. (2008), “Vicissitudes of the Validators,” presentation at the 2008 Reidy Interactive Lecture Series, Portsmouth, NH. Available at http://www.cde.state.co.us/cdedocs/OPP/HenryBraunLectureReidy2008.ppt
  • Briggs, D., and Domingue, B. (2011), Due Diligence and the Evaluation of Teachers: A Review of the Value-Added Analysis Underlying the Effectiveness Rankings of Los Angeles Unified School District Teachers by the Los Angeles Times, Boulder, CO: National Education Policy Center. Available at http://nepc.colorado.edu/publication/due-diligence.
  • ——— (2013), “The Gains From Vertical Scaling,” Journal of Educational and Behavioral Statistics, 38, 551–576.
  • Briggs, D., and Weeks, J. (2011), “The Persistence of School-Level Value-Added,” Journal of Educational and Behavioral Statistics, 36, 616–637.
  • Broatch, J., and Lohr, S. (2012), “Multidimensional Assessment of Value Added by Teachers to Real-World Outcomes,” Journal of Educational and Behavioral Statistics, 37, 256–277.
  • Castellano, K., Rabe-Hesketh, S., and Skrondal, A. (2013), “Composition, Context, and Endogeneity in School and Teacher Comparisons,” Journal of Educational and Behavioral Statistics, 39, 333–367.
  • Chetty, R., Friedman, J. N., and Rockoff, J. E. (2014a), “Discussion of the American Statistical Association’s Statement (2014) on Using Value-Added Models for Educational Assessment,” Statistics and Public Policy, 1, 111–113. Available at http://amstat.tandfonline.com/doi/pdf/10.1080/2330443X.2014.955227.
  • ——— (2014b), “Measuring the Impact of Teachers I: Evaluating Bias in Teacher Value-Added Estimates,” American Economic Review, 104, 2593–2632.
  • ——— (2014c), “Measuring the Impact of Teachers II: Teacher Value-Added and Student Outcomes in Adulthood,” American Economic Review, 104, 2633–2679.
  • ——— (2016), “Using Prior Test Scores to Assess the Validity of Value-Added Models,” Paper presented at the ASSA meetings 2016, San Francisco, CA.
  • Coleman, J. S., Campbell, E. Q., Hobson, C. J., McPartland, F., Mood, A. M., Weinfeld, F. D., and York, R. L. (1966), Equality of Educational Opportunity, Washington, DC: U.S. Government Printing Office.
  • Collins, C., and Amrein-Beardsley, A. (2014), “Putting Growth and Value-Added Models on the Map: A National Overview,” Teachers College Record, 116. Available at http://www.tcrecord.org/Content.asp?ContentId=17291.
  • Corcoran, S. (2010), Can Teachers be Evaluated by Their Students’ Test Scores? Should They Be? The Use of Value Added Measures of Teacher Effectiveness in Policy and Practice, Educational Policy for Action Series. Providence, RI: Annenberg Institute for School Reform at Brown University. Available at http://files.eric.ed.gov/fulltext/ED522163.pdf.
  • Deming, W. E. (1994), The New Economics: For Industry, Government, Education, Cambridge, MA: Massachusetts Institute of Technology (MIT) Center for Advanced Educational Services.
  • Durso, C. S. (2011), An Analysis of the Use and Validity of Test-Based Teacher Evaluations Reported by the Los Angeles Times, Boulder, CO: National Education Policy Center. Available at http://nepc.colorado.edu/publication/analysis-la-times-2011.
  • Ehlert, M., Koedel, C., Parsons, E., and Podgursky, M. J. (2014), “The Sensitivity of Value-Added Estimates to Specification Adjustments: Evidence From School- and Teacher-Level Models in Missouri,” Statistics and Public Policy, 1, 19–27. Available at http://amstat.tandfonline.com/doi/pdf/10.1080/2330443X.2013.856152
  • Gabriel, R., and Lester, J. N. (2013), “Sentinels Guarding the Grail: Value-Added Measurement and the Quest for Education Reform,” Education Policy Analysis Archives, 21, 1–30. Available at http://epaa.asu.edu/ojs/article/view/1165.
  • Gill, B., Bruch, J., and Booker, K. (2013), Using Alternative Student Growth Measures for Evaluating Teacher Performance: What The Literature Says, Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Mid-Atlantic.
  • Gill, B., English, B., Furgeson, J., and McCullough, M. (2014), Alternative Student Growth Measures for Teacher Evaluation: Profiles of Early-Adopting Districts, Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Mid-Atlantic.
  • Goldhaber, D., Brewer, D., and Anderson, D. (1999), “A Three-Way Error Components Analysis of Educational Productivity,” Education Economics, 7, 199–208.
  • Goldhaber, D., and Chaplin, D. D. (2015), “Assessing the Rothstein Falsification Test: Does it Really Show Teacher Value-Added Models are Biased?" Journal of Research on Educational Effectiveness, 8, 8–34.
  • Goldhaber, D., and Hansen, M. (2013), “Is it Just a Bad Class? Assessing the Long-Term Stability of Estimated Teacher Performance,” Economica, 80, 589–612.
  • Goldhaber, D., Walch, J., and Gabele, B. (2012), “Does the Model Matter? Exploring the Relationships Between Different Student Achievement-Based Teacher Assessments,” Statistics and Public Policy, 1, 28–39.
  • Goldhaber, D. D., Goldschmidt, P., and Tseng, F. (2013), “Teacher Value-Added at the High-School Level: Different Models, Different Answers?" Educational Evaluation and Policy Analysis, 35, 220–236.
  • Good, T. L. (2014), “What Do We Know About How Teachers Influence Student Performance on Standardized Tests: And Why Do We Know So Little About Other Student Outcomes,” Teachers College Record, 116, 1–41.
  • Grossman, P., Cohen, J., Ronfeldt, M., and Brown, L. (2014), “The Test Matters: The Relationship Between Classroom Observation Scores and Teacher Value Added on Multiple Types of Assessment,” Educational Researcher, 43, 293–303.
  • Guarino, C. M., Maxfield, M., Reckase, M. D., Thompson, P., and Wooldridge, J. M. (2012), An Evaluation of Empirical Bayes’ Estimation of Value-Added Teacher Performance Measures, East Lansing, MI: Education Policy Center at Michigan State University. Available at http://www.aefpweb.org/sites/default/files/webform/empirical_bayes_20120301_AEFP.pdf
  • Hanges, P., Schneider, B., and Niles, K. (1990), “Stability of Performance: An Interactionist Perspective,” Journal of Applied Psychology, 75, 658–667.
  • Hansen, M., and Goldhaber, D. (2015), Response to AERA Statement on Value-Added Measures: Where are the Cautionary Statements on Alternative Measures? Washington, DC: The Brookings Institution. Available at http://www.brookings.edu/blogs/brown-center-chalkboard/posts/2015/11/19-aera-value-added-measures-hansen-goldhaber.
  • Hanushek, E., and Rivkin, S. (2010), “Generalizations About Using Value-Added Measures of Teacher Quality,” American Economic Review, 100, 267–271.
  • Harris, D. N. (2009), “Teacher Value-Added: Don’t End the Search Before It Starts,” Journal of Policy Analysis and Management, 28, 693–700.
  • ——— (2011), Value-Added Measures in Education: What Every Educator Needs to Know, Cambridge, MA: Harvard Education Press.
  • Harris, D. N., Ingle, W. K., and Rutledge, S. A. (2014), “How Teacher Evaluation Methods Matter for Accountability: A Comparative Analysis of Teacher Effectiveness Ratings by Principals and Teacher Value-Added Measures,” American Educational Research Journal, 51, 73–112.
  • Hill, H. C., Kapitula, L., and Umland, K. (2011), “A Validity Argument Approach to Evaluating Teacher Value-Added Scores,” American Educational Research Journal, 48, 794–831.
  • Holland, P. W. (1986), “Statistics and Causal Inference,” Journal of the American Statistical Association, 81, 945–960.
  • Isenberg, E., and Hock, H. (2012), Measuring School and Teacher Value Added in DC, 2011–2012 School Year, Washington, DC: Mathematica Policy Research. Available at http://www.learndc.org/sites/default/files/resources/MeasuringValue-AddedinDC2011-2012.pdf.
  • Isenberg, E., and Walsh, E. (2014), Measuring School and Teacher Value Added in DC, 2013–2014 School Year, Washington, DC: Mathematica Policy Research. Available at http://www.mathematica-mpr.com/~/media/publications/PDFs/education/value-added_DC.pdf.
  • Johnson, S. M. (2015), “Will VAMs Reinforce the Walls of the Egg-Crate School?" Educational Researcher, 44, 117–126.
  • Jones, N. D., Buzick, H. M., and Turkan, S. (2013), “Including Students With Disabilities and English Learners in Measures of Educator Effectiveness,” Educational Researcher, 42, 234–241.
  • Kane, T., McCaffrey, D., Miller, T., and Staiger, D. (2013), Have We Identified Effective Teachers? Validating Measures of Effective Teaching Using Random Assignment, Seattle, WA: Bill and Melinda Gates Foundation. Available at http://www.metproject.org/downloads/MET_Validating_Using_Random_Assignment_Research_Paper.pdf
  • Kane, T., and Staiger, D. (2002), “The Promise and Pitfall of Using Imprecise School Accountability Measures,” Journal of Economic Perspectives, 16, 91–114.
  • Kelly, S., and Monczunski, L. (2007), “Overcoming the Volatility in School-Level Gain Scores: A New Approach to Identifying Value Added With Cross-Sectional Data,” Educational Researcher, 36, 279–287.
  • Koedel, C., Mihaly, K., and Rockoff, J. E. (2015), “Value-Added Modeling: A Review,” Economics of Education Review, 47, 180–195.
  • Konstantopoulos, S. (2014), “Teacher Effects, Value-Added Models, and Accountability,” Teachers College Record, 116, 1–21.
  • Kupermintz, H. (2003), “Teacher Effects and Teacher Effectiveness: A Validity Investigation of the Tennessee Value-Added Assessment System,” Educational Evaluation and Policy Analysis, 25, 287–298.
  • Lefgren, L., and Sims, D. (2012), “Using Subject Test Scores Efficiently to Predict Teacher Value-Added,” Educational Evaluation and Policy Analysis, 34, 109–121.
  • Lockwood, J. R., McCaffrey, D. F., Mariano, L. T., and Setodji, C. (2007), “Bayesian Methods for Scalable Multivariate Value-Added Assessment,” Journal of Educational and Behavioral Statistics, 32, 125–150.
  • Loeb, S., Soland, J., and Fox, J. (2014), “Is a Good Teacher a Good Teacher for All? Comparing Value-Added of Teachers With English Learners and Non-English Learners,” Educational Evaluation and Policy Analysis, 36, 457–475.
  • Lohr, S. (2012). “The Value Deming’s Ideas Can Add to Educational Evaluation,” Statistics, Politics, and Policy, 3, 1–40.
  • ——— (2014), “Red Beads and Profound Knowledge: Deming and Quality of Education,’’ in Deming Lecture presented at the Joint Statistical Meetings. Available at http://www.amstat.org/meetings/jsm/2014/program.cfm
  • ——— (2015), “Red Beads and Profound Knowledge: Deming and Quality of Education,” Education Policy Analysis Archives, 23, 80–95.
  • Martineau, J. (2006), “Distorting Value Added: The Use of Longitudinal, Vertically Scaled Student Achievement Data for Growth-Based, Value-Added Accountability,” Journal of Educational and Behavioral Statistics, 31, 35–62.
  • McCaffrey, D. F., Lockwood, J. R., Koretz, D. M., and Hamilton, L. S. (2004), “Models for Value-Added Modeling of Teacher Effects,” Journal of Educational and Behavioral Statistics, 29, 67–101.
  • McCaffrey, D. F., Sass, T. R., Lockwood, J. R., and Mihaly, K. (2009), “The Intertemporal Variability of Teacher Effect Estimates,” Education Finance and Policy, 4, 572–606.
  • Neal, D. (2013), “The Consequences of Using One Assessment System to Pursue Two Objectives,’’ Working Paper 19214, National Bureau of Economic Research (NBER), Cambridge, MA. Available at http://www.nber.org/papers/w19214.
  • Newton, X. A., Darling-Hammond, L., Haertel, E., and Thomas, E. (2010), “Value Added Modeling of Teacher Effectiveness: An Exploration of Stability Across Models and Contexts,” Education Policy Analysis Archives, 18, 23–39. Available at http://epaa.asu.edu/ojs/article/view/810.
  • Papay, J. P. (2011), “Different Tests, Different Answers: The Stability of Teacher Value-Added Estimates Across Outcome Measures,” American Educational Research Journal, 48, 163–193.
  • Paufler, N. A., and Amrein-Beardsley, A. (2014), “The Random Assignment of Students Into Elementary Classrooms: Implications for Value-Added Analyses and Interpretations,” American Educational Research Journal, 51, 328–362. doi:10.3102/0002831213508299
  • Peterson, P. E. (2010), “Brookings, Baseball and Value Added Assessments of Teachers,” Education Next. Available at http://educationnext.org/brookings-baseball-and-value-added-assessments-of-teachers/.
  • Pivovarova, M., Broatch, J., and Amrein-Beardsley, A. (2014), “Chetty et al. on the American Statistical Association’s Recent Position Statement on Value-Added Models (VAMs): Five Points of Contention [Commentary],” Teachers College Record. Available at http://www.tcrecord.org/content.asp?contentid=17633.
  • Polikoff, M. S., and Porter, A. C. (2014), “Instructional Alignment as a Measure of Teaching Quality,” Educational Evaluation and Policy Analysis, 36, 399–416.
  • Raudenbush, S. W. (2004), “What are Value-Added Models Estimating and What Does This Imply for Statistical Practice?" Journal of Educational and Behavioral Statistics, 29, 121–129.
  • Raudenbush, S. W., and Jean, M. (2012), How Should Educators Interpret Value-Added Scores? Stanford, CA: Carnegie Knowledge Network. Available at http://www.carnegieknowledgenetwork.org/briefs/value-added/interpreting-value-added/.
  • Reardon, S. F., and Raudenbush, S. W. (2009), “Assumptions of Value-Added Models for Estimating School Effects,” Education Finance and Policy, 4, 492–519.
  • Reckase, M. D. (2004), “The Real World is More Complicated Than We Would Like,” Journal of Educational and Behavioral Statistics, 29, 117–120.
  • Richardson, W. (2012, September 27), “Do Parents Really Want More Than 200 Separate State-Mandated Assessments for Their Children?" Huffington Post. Available at http://www.huffingtonpost.com/will-richardson/do-parents-really-want-ov_b_1913704.html.
  • Rivkin, S., Hanushek, E., and Kain, J. (2005), “Teachers, Schools, and Academic Achievement,” Econometrica, 73, 417–458.
  • Robelen, E. W. (2012, January 9), “Yardsticks Vary by Nation in Calling Education to Account,” Education Week. Available at http://www.edweek.org/ew/articles/2012/01/12/16testing.h31.html?tkn=ZRXFgi-Q5krPVo%2FsHmf1v%2Bh33GqSq%2ByE1LBEQ&cmp=ENL-EU-NEWS1&intc=EW-QC12-ENL.
  • Rosenbaum, P., and Rubin, D. (1983), “The Central Role of the Propensity Score in Observational Studies for Causal Effects,” Biometrika, 70, 41–55.
  • Rothstein, J. (2009), “Student Sorting and Bias in Value-Added Estimation: Selection on Observables and Unobservables,” Education Finance and Policy, 4, 537–571.
  • ——— (2010), “Teacher Quality in Educational Production: Tracking, Decay, and Student Achievement,” Quarterly Journal of Economics, 125, 175–214.
  • ——— (2014), Revisiting the Impacts of Teachers, Berkeley, CA: University of California-Berkeley. Available at http://eml.berkeley.edu/~jrothst/workingpapers/rothstein_cfr.pdf.
  • Rothstein, J., and Mathis, W. J. (2013), Review of Two Culminating Reports From the MET Project, Boulder, CO: National Education Policy Center. Available at http://nepc.colorado.edu/thinktank/review-MET-final-2013.
  • Rubin, D. B. (1978), “Bayesian Inference for Causal Effects: The Role of Randomization,” The Annals of Statistics, 6, 34–58.
  • Rubin, D. B., Stuart, E. A., and Zanutto, E. L. (2004), “A Potential Outcomes View of Value-Added Assessment in Education,” Journal of Educational and Behavioral Statistics, 29, 103–116.
  • Sawchuk, S. (2015), “Teacher Evaluation Heads to the Courts,” Education Week. Available at http://www.edweek.org/ew/section/multimedia/teacher-evaluation-heads-to-the-courts.html.
  • Scherrer, J. (2011), “Measuring Teaching Using Value-Added Modeling: The Imperfect Panacea,” NASSP Bulletin, 95, 122–140.
  • Schochet, P. Z., and Chiang, H. S. (2010), Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains, Washington, DC: U.S. Department of Education. Available at http://ies.ed.gov/ncee/pubs/20104004/.
  • ——— (2013), “What are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?" Journal of Educational and Behavioral Statistics, 38, 142–171.
  • Taylor, K. (2015, November 25), “Cuomo, in Shift, is Said to Back Reducing Test Scores’ Role in Teacher Reviews,” The New York Times. Available at http://www.nytimes.com/2015/11/26/nyregion/cuomo-in-shift-is-said-to-back-reducing-test-scores-role-in-teacher-reviews.html
  • Tekwe, C. D., Carter, R. L., Ma, C., Algina, J., Lucas, M. E., Roth, J., Ariet, M., Fisher, T., and Resnick, M. B. (2004), “An Empirical Comparison of Statistical Models for Value-Added Assessment of School Performance,” Journal of Educational and Behavioral Statistics, 29, 11–36.
  • Wainer, H. (2004), “Introduction to a Special Issue of the Journal of Educational and Behavioral Statistics on Value-Added Assessment,” Journal of Educational and Behavioral Statistics, 29, 1–3.
  • ——— (2011), Uneducated Guesses: Using Evidence to Uncover Misguided Education Policies, Princeton, NJ: Princeton University Press.
  • Walsh, E., and Isenberg, E. (2015), “How Does Value-Added Compare to Student Growth Percentiles?" Statistics and Public Policy, 2, 1–13.
  • Whitehurst, G. J. (2013), Teacher Value Added: Do We Want a Ten Percent Solution? Washington, DC: Brookings Institution.
  • Winters, M. A., and Cowen, J. M. (2013), “Who Would Stay, Who Would Be Dismissed? An Empirical Consideration of Value-Added Teacher Retention Policies,” Educational Researcher, 42, 330–337.
  • Wright, S. P. (2010), An Investigation of Two Nonparametric Regression Models for Value-Added Assessment in Education, Cary, NC: SAS Institute, Inc.