
Is There a Magnet-School Effect? A Multisite Study of MSAP-Funded Magnet Schools

Pages 77–99 | Received 10 Feb 2016, Accepted 27 Jan 2017, Published online: 17 Mar 2017

References

  • Ballou, D. (2007). Magnet schools and peers: Effects on mathematics achievement. Nashville, TN: Vanderbilt University.
  • Ballou, D. (2009). Magnet school outcomes. In M. Berends, M. G. Springer, D. Ballou, & H. J. Walberg (Eds.), Handbook of research on school choice (pp. 409–426). New York, NY: Routledge.
  • Bang, H., & Robins, J. M. (2005). Doubly robust estimation in missing data and causal inference models. Biometrics, 61, 962–973. doi: 10.1111/j.1541-0420.2005.00377.x
  • Betts, J., Rice, L., Zau, A., Tang, E., & Koedel, C. (2006). Does school choice work? Effects on student integration and academic achievement. Los Angeles, CA: Public Policy Institute of California.
  • Betts, J. R., & Tang, Y. E. (2011). The effect of charter schools on student achievement: A meta-analysis of the literature. Bothell, WA: National Charter School Research Project, Center on Reinventing Public Education.
  • Biesta, G. (2007). Why “what works” won't work: Evidence-based practice and the democratic deficit in educational research. Educational Theory, 57, 1–22.
  • Bifulco, R., Cobb, C., & Bell, C. (2009). Can interdistrict choice boost student achievement? The case of Connecticut's interdistrict magnet school program. Educational Evaluation and Policy Analysis, 31, 323–345. doi: 10.3102/0162373709340917
  • Blank, R., & Archbald, D. (1992). Magnet schools and educational quality. The Clearing House, 66, 81–87.
  • Blank, R., Dentler, R., Baltzell, C., & Chabotar, K. (1983). Survey of magnet schools: Analyzing a model for quality integrated education. Washington, DC: U.S. Department of Education.
  • Borenstein, M., Hedges, L., & Rothstein, H. (2007). Meta-analysis: Fixed effect vs. random effects. Retrieved from http://www.Meta-Analysis.com
  • Brown v. Board of Education, 347 U.S. 483 (1954).
  • Civil Rights Act of 1964, Pub. L. No. 88-352, 78 Stat. 241 (1964).
  • Cohen, D. K., & Hill, H. C. (2001). Learning policy: When state education reform works. New Haven, CT: Yale University Press.
  • Cook, T. D., Shadish, W. R., & Wong, V. C. (2008). Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons. Journal of Policy Analysis and Management, 27, 724–750. doi: 10.1002/9781444307399.ch8
  • Crain, R., Heebner, A., & Si, Y. (1992). The effectiveness of New York City's career magnet schools: An evaluation of ninth grade performance using an experimental design. Berkeley, CA: National Center for Research in Vocational Education.
  • Cullen, J. B., Jacob, B. A., & Levitt, S. (2003). The effect of school choice on student outcomes: Evidence from randomized lotteries. National Bureau of Economic Research Working Paper 10113. Cambridge, MA: NBER.
  • Deo, S. R. (2007). Where have all the Lovings gone: The continuing relevance of the movement for a multiracial category and racial classification after Parents Involved in Community Schools v. Seattle School District No. 1. Journal of Gender, Race & Justice, 11, 409–452.
  • Dickson, B. L., Pinchback, C. L., & Kennedy, R. L. (2000). Academic achievement and magnet schools. Research in the Schools, 7, 11–17.
  • Duflo, E. (2004). Scaling up and evaluation. In F. Bourguignon & B. Pleskovic (Eds.), Annual World Bank Conference on Development Economics 2004 (pp. 341–369). New York, NY: Oxford University Press.
  • Elementary and Secondary Education Act of 1965, Pub. L. No. 100-297, § 3001, 102 Stat. 231 (1988).
  • Extension and Revision of the Emergency School Aid Act, Pub. L. No. 94-482, § 321 (1976).
  • Fleming, N. (2012). Magnets adjust to new climate of school choice. Education Week, 31(30), 1–16.
  • Flemming, A., Freeman, F., Horn, S., Ruiz, M., Saltzman, M., & Nunez, L. (1979). School desegregation in Tacoma, Washington. A staff report of the United States Commission on Civil Rights. Washington, DC: U.S. Commission on Civil Rights. Retrieved from https://www.law.umaryland.edu/marshall/usccr/documents/cr12t11.pdf
  • Frankenberg, E., & Le, Q. C. (2008). The post-Parents Involved challenge: Confronting extralegal obstacles to integration. Ohio State Law Journal, 69, 1015–1072.
  • Frankenberg, E., & Siegel-Hawley, G. (2010). Choosing diversity: School choice and racial integration in the age of Obama. Stanford Journal of Civil Rights and Civil Liberties, 6, 219–252.
  • Frankenberg, E., Siegel-Hawley, G., & Orfield, G. (2008). The forgotten choice? Rethinking magnet schools in a changing landscape. Los Angeles, CA: Civil Rights Project/Proyecto Derechos Civiles, University of California, Los Angeles.
  • Gamoran, A. (1996). Student achievement in public magnet, public comprehensive and private city high schools. Educational Evaluation and Policy Analysis, 18, 1–18. doi: 10.2307/1164227
  • Glazerman, S., Levy, D. M., & Myers, D. (2003). Nonexperimental versus experimental estimates of earnings impacts. Annals of the American Academy of Political and Social Science, 589, 63–93. doi: 10.1177/0002716203254879
  • Goldring, E. B. (2009). Perspectives on magnet schools. In M. Berends, M. G. Springer, D. Ballou, & H. J. Walberg (Eds.), Handbook of research on school choice (pp. 361–378). New York, NY: Routledge.
  • Goldschmidt, P., & Martinez-Fernandez, J. (2004). The relationship between school quality and the probability of passing high-stakes performance assessments (Technical Report 644). Los Angeles, CA: Center for Research on Evaluation, Standards, and Student Testing.
  • Grooms, A. A., & Williams, S. M. (2015). The reversed role of magnets in St. Louis, implications for Black student outcomes. Urban Education, 50, 454–473. doi: 10.1177/0042085913516131
  • Heckman, J. J., Ichimura, H., & Todd, P. E. (1998). Matching as an econometric evaluation estimator. Review of Economic Studies, 65, 261–294. doi: 10.2307/2971733
  • Hedges, L. V. (1982). Estimation of effect size from a series of independent experiments. Psychological Bulletin, 92, 490–499. doi: 10.1037/0033-2909.92.2.490
  • Hill, N. E., Castellino, D. R., Lansford, J. E., Nowlin, P., Dodge, K. A., Bates, J. E., & Pettit, G. S. (2004). Parent academic involvement as related to school behavior, achievement, and aspirations: Demographic variations across adolescence. Child Development, 75, 1491–1509. doi: 10.1111/j.1467-8624.2004.00753.x
  • Holland, P. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945–960.
  • Hox, J. J. (1995). Applied multilevel analysis. Amsterdam, The Netherlands: TT-publikaties.
  • Hox, J. J. (2010). Multilevel analysis: Techniques and applications. New York, NY: Routledge.
  • Huber, M., Lechner, M., & Steinmayr, A. (2012). Radius matching on the propensity score with bias adjustment: Finite sample behavior, tuning parameters and software implementation. Paper for Swiss Institute for Empirical Economic Research, Sankt Gallen, Switzerland.
  • Huber, M., Lechner, M., & Wunsch, C. (2010). How to control for many covariates? Reliable estimators based on the propensity score (Discussion paper 5268). Bonn, Germany: Institute for the Study of Labor.
  • Judson, E. (2014). Effects of transferring to STEM-focused charter and magnet schools on student achievement. Journal of Educational Research, 107, 255–266. doi: 10.1080/00220671.2013.823367
  • Kafer, K. (2012). A chronology of school choice in the U.S. Journal of School Choice, 3, 415–416. doi: 10.1080/15582150903489786
  • Kalaian, S. A. (2003). Meta-analysis methods for synthesizing treatment effects in multisite studies: Hierarchical linear modeling (HLM) perspective. Practical Assessment, Research & Evaluation, 8(15). Retrieved from http://PAREonline.net/getvn.asp?v=8&n=15
  • Kemple, J. J., & Scott-Clayton, J. (2004). Career academies: Impacts on labor market outcomes and educational attainment. New York, NY: Manpower Demonstration Research Corporation.
  • Kemple, J. J., & Snipes, J. C. (2000). Career academies: Impacts on students' engagement and performance in high school. New York, NY: Manpower Demonstration Research Corporation.
  • Larson, J. C., Witte, J. C., Staib, S. K., & Powell, M. I. (1993). A microscope on secondary magnet schools in Montgomery County, Maryland. In D. R. Waldrip, W. L. Marks, & N. Estes (Eds.), Magnet school policy studies and evaluations (pp. 261–435). Austin, TX: Morgan Printing.
  • McLaughlin, M. W. (1987). Learning from experience: Lessons from policy implementation. Educational Evaluation and Policy Analysis, 9, 171–178. doi: 10.2307/1163728
  • Morgan, S. L., & Winship, C. (2007). Counterfactuals and causal inference: Methods and principles for social research. New York, NY: Cambridge University Press.
  • National Alliance of Public Charter Schools. (2016). Facts about charters. Washington, DC: Author. Retrieved from http://www.publiccharters.org/get-the-facts/public-charter-schools/faqs/
  • Orfield, G., Frankenberg, E., & Garces, L. M. (2008). Statement of American social scientists of research on school desegregation to the U.S. Supreme Court in Parents v. Seattle School District and Meredith v. Jefferson County. Urban Review, 40, 96–136. doi: 10.1007/s11256-007-0073-7
  • Parents Involved in Community Schools v. Seattle School District No. 1, 551 U.S. 701 (2007).
  • Penta, M. (2001). Comparing student performance at program magnet, year-round magnet, and non-magnet elementary schools. Raleigh, NC: Wake County Public School System, Department of Evaluation and Research. Retrieved June 8, 2011, from http://www.eric.ed.gov/ERICWebPortal/detail?accno=ED457178
  • Price, J., & Stern, J. (1987). Magnet schools as a strategy for integration and school reform. Yale Law & Policy Review, 5, 291–321.
  • Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Newbury Park, CA: Sage.
  • Raudenbush, S. W., & Sadoff, S. (2008). Statistical inference when classroom quality is measured with error. Journal of Research on Educational Effectiveness, 1, 138–154.
  • Rhea, A., & Regan, R. (2007). Magnet program review. Raleigh, NC: Wake County Public School System, Evaluation and Research Department. Retrieved June 8, 2011, from http://www.aera.net/uploadedFiles/Divisions/School_Evaluation_and_Program_Development_(H)/Awards/Cat%203%20Nancy%20Baenen%20-%20magnet_review_07.pdf
  • Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70, 41–55. doi: 10.1017/cbo9780511810725.016
  • Rosenbaum, P. R., & Rubin, D. B. (1985). Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. American Statistician, 39, 33–38. doi: 10.1017/cbo9780511810725.019
  • Rosenthal, R., & Rubin, D. B. (1982). Comparing effect sizes of independent studies. Psychological Bulletin, 92, 500–504. doi: 10.1037/0033-2909.92.2.500
  • Rossell, C. H. (1979). Magnet schools as a desegregation tool: The importance of contextual factors in explaining their success. Urban Education, 14, 303–320. doi: 10.1177/0042085979143003
  • Rossell, C. H. (2005). No longer famous but still intact. Education Next, 5(2), 44–49.
  • Rossell, C. H. (2009). Legal aspects in magnet schools. In M. Berends, M. G. Springer, D. Ballou, & H. J. Walberg (Eds.), Handbook of research on school choice (pp. 379–392). New York, NY: Routledge.
  • Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66, 688–701. doi: 10.1037/h0037350
  • Rubin, D. B. (2005). Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100, 322–331. doi: 10.1198/016214504000001880
  • Rubin, D. B., & Thomas, N. (1996). Matching using estimated propensity scores: Relating theory to practice. Biometrics, 52, 249–264. doi: 10.2307/2533160
  • Schneider, B., Carnoy, M., Kilpatrick, J., Schmidt, W., & Shavelson, R. (2007). Estimating causal effects using experimental and observational designs: A think tank white paper. Washington, DC: American Educational Research Association.
  • Seever, M. (1993). Summative evaluation of the Knotts Environmental Science Magnet Elementary School. In D. R. Waldrip, W. L. Marks, & N. Estes (Eds.), Magnet school policy studies and evaluations (pp. 437–475). Austin, TX: Morgan Printing.
  • Shadish, W. R., Clark, M. H., & Steiner, P. M. (2008). Can nonrandomized experiments yield accurate answers? A randomized experiment comparing random and nonrandom assignments. Journal of the American Statistical Association, 103(484), 1334–1344. doi: 10.1198/016214508000000733
  • Siegel-Hawley, G., & Frankenberg, E. (2012). Reviving magnet schools: Strengthening a successful choice option. A research brief. Los Angeles, CA: Civil Rights Project/Proyecto Derechos Civiles, University of California, Los Angeles.
  • Silver, D., Saunders, M., & Zarate, E. (2008). What factors predict high school graduation in the Los Angeles Unified School District? California Dropout Research Project Report #14, UCLA/IDEA and UC/ACCORD. Los Angeles, CA: University of California, Los Angeles.
  • Somers, M. A., Zhu, P., & Wong, E. (2011). Whether and how to use state tests to measure student achievement in a multi-state randomized experiment: An empirical assessment based on four recent evaluations (NCEE 2012-4015). Washington, DC: National Center for Education Evaluation and Regional Assistance.
  • Spillane, J. P. (2009). Standards deviation: How schools misunderstand education policy. Cambridge, MA: Harvard University Press.
  • Steel, L., & Levine, R. (1994). Educational innovations in multiracial contexts: The growth of magnet schools in American education. Palo Alto, CA: American Institutes for Research in the Behavioral Sciences.
  • Stein, M. L., Berends, M., Fuchs, D., McMaster, K., Sáenz, L., Yen, L., & Compton, D. L. (2008). Scaling up an early reading program: Relationships among teacher support, fidelity of implementation, and student performance across different sites and years. Educational Evaluation and Policy Analysis, 30, 368–388. doi: 10.3102/0162373708322738
  • Stuart, E. A. (2007). Estimating causal effects using school-level data sets. Educational Researcher, 36(4), 187–198. doi: 10.3102/0013189x07303396
  • Stuart, E. A. (2010). Matching methods for causal inference: A review and a look forward. Statistical Science: A Review Journal of the Institute of Mathematical Statistics, 25, 1–21. doi: 10.1214/09-sts313
  • U.S. Department of Education. (2013). U.S. Department of Education awards $89.8 million in Magnet School Assistance Program grants. Retrieved from https://www.ed.gov/news/press-releases/us-department-education-awards-898-million-magnet-school-assistance-program-gran
  • U.S. Department of Education. (2017). Programs: Magnet schools assistance. Retrieved from https://www2.ed.gov/programs/magnet/index.html
  • Wang, J., & Herman, J. (2017). Magnet schools: History, description, and effects. In R. Fox & N. Buchanan (Eds.), Handbook of school choice (pp. 158–179). New York, NY: John Wiley and Sons.
  • What Works Clearinghouse. (2014). WWC procedures and standards handbook version 3.0. Washington, DC: What Works Clearinghouse, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_procedures_v3_0_standards_handbook.pdf
  • Wilkinson III, J. H. (2007). Seattle and Louisville school cases: There is no other way. Harvard Law Review, 121, 158–183.
  • Winkelmayer, W. C., & Kurth, T. (2004). Propensity scores: Help or hype? Nephrology Dialysis Transplantation, 19(7), 1671–1673. doi: 10.1093/ndt/gfh104
  • Witte, J., & Walsh, D. (1990). A systematic test of the effective schools model. Educational Evaluation and Policy Analysis, 12, 188–213. doi: 10.2307/1163633
  • Yang, Y., Li, Y., & Tompkins, L. (2005, April). Paper presented at the Annual Meeting of the American Educational Research Association, Montreal, Canada. Retrieved June 8, 2011, from http://www.eric.ed.gov/ERICWebPortal/detail?accno=ED490624
  • Yin, R. K., Schmidt, R. J., & Besag, F. (2006). Aggregating student achievement trends across states with different tests: Using standardized slopes as effect sizes. Peabody Journal of Education, 81(2), 47–61. doi: 10.1207/s15327930pje8102_3
