The Value of Efficiency Measures: Lessons from Workforce Development Programs

References

  • Anderson, K., & Raymond, J. (1993). The effect of creaming on placement rates under the Job Training Partnership Act. Industrial and Labor Relations Review, 46(4), 613–624.
  • Anderson, K., Burkhauser, R., Raymond, J., & Russell, C. (1992). Mixed signals in the Job Training Partnership Act. Growth and Change, 22(3), 32–48.
  • Barnow, B. S. (1996). Policies for people with disabilities in U.S. employment and training programs. In J. L. Mashaw, V. Reno, R. Burkhauser, & M. Berkowitz (Eds.), Disabilities, cash benefits, and work (pp. 297–328). Kalamazoo, MI: Upjohn Institute for Employment Research.
  • Barnow, B. S. (2000). Exploring the relationship between performance management and program impact: A case study of the Job Training Partnership Act. Journal of Policy Analysis and Management, 19(1), 118–141.
  • Barnow, B. S., & Heinrich, C. (2010). One size fits all? The pros and cons of performance standard adjustments. Public Administration Review, 70(1), 60–71.
  • Barnow, B. S., & King, C. T. (2005). The Workforce Investment Act in eight states. Albany, NY: Nelson A. Rockefeller Institute of Government.
  • Barnow, B. S., & Smith, J. A. (2004). Performance management of U.S. job training programs: Lessons from the Job Training Partnership Act. Public Finance and Management, 4(3), 247–287.
  • Blalock, A., & Barnow, B. (2001). Is the new obsession with “performance management” masking the truth about social programs? In D. Forsythe (Ed.), Quicker, better, cheaper: Managing performance in American government (pp. 485–517). Albany, NY: Rockefeller Institute Press.
  • Bloom, H. S., Michalopoulos, C., Hill, C. J., & Lei, Y. (2002). Can nonexperimental comparison group methods match the findings from a random assignment evaluation of mandatory welfare-to-work programs? New York: Manpower Demonstration Research Corporation.
  • Boardman, A., Greenberg, D., Vining, A., & Weimer, D. (2010). Cost-benefit analysis (4th ed.). Upper Saddle River, NJ: Prentice Hall.
  • Cook, T. D., Shadish, W. R., & Wong, V. C. (2008). Three conditions under which experiments and observational studies produce comparable causal estimates. Journal of Policy Analysis and Management, 27(4), 724–750.
  • Courty, P., & Marschke, G. (1996). Moral hazard under incentive systems: The case of a federal bureaucracy. In G. Libecap (Ed.), Advances in the study of entrepreneurship, innovation and economic growth (vol. 7, pp. 157–190). Greenwich, CT: JAI Press.
  • Courty, P., & Marschke, G. (1997). Measuring government performance: Lessons from a federal job-training program. American Economic Review, 87(2), 383–388.
  • Courty, P., & Marschke, G. (2004). An empirical investigation of gaming responses to explicit performance incentives. Journal of Labor Economics, 22(1), 22–56.
  • Courty, P., & Marschke, G. (2007). Making government accountable: Lessons from a federal job training program. Public Administration Review, 67(5), 904–916.
  • Dickinson, K., West, R. W., Kogan, D. J., Drury, D. A., Franks, M. S., Schlictmann, L., & Vencil, M. (1988). Evaluation of the effects of JTPA performance standards on clients, services, and costs. Report No. 88-15. Washington, DC: National Commission for Employment Policy Research.
  • Geys, B., & Moesen, W. (2009). Measuring local government technical (in)efficiency: An application and comparison of FDH, DEA, and econometric approaches. Public Performance & Management Review, 32(4), 499–513.
  • Hatry, H. P. (2007). Performance measurement: Getting results. Washington, DC: Urban Institute Press.
  • Heckman, J., & Smith, J. (2004). The determinants of participation in a social program: Evidence from JTPA. Journal of Labor Economics, 22(2), 243–298.
  • Heckman, J., Heinrich, C., & Smith, J. (2002). The performance of performance standards. Journal of Human Resources, 37(4), 778–811.
  • Heckman, J., Smith, J., & Taber, C. (1996). What do bureaucrats do? The effects of performance standards and bureaucratic preferences on acceptance into the JTPA program. In G. Libecap (Ed.), Advances in the study of entrepreneurship, innovation and economic growth (vol. 7, pp. 191–218). Greenwich, CT: JAI Press.
  • Marschke, G. (2002). Performance incentives and bureaucratic behavior: Evidence from a federal bureaucracy. Working paper. Albany: Department of Economics, State University of New York at Albany.
  • Moore, R., Gorman, P., & Wilson, A. (2007). California one-stop system cost study report. Prepared for California Workforce Investment Board by California State University—Northridge.
  • Phillips, J. J. (2003). Return on investment in training and performance improvement programs (2nd ed.). New York: Butterworth Heinemann.
  • Poister, T. H. (2010). Performance measurement: Monitoring program outcomes. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (3rd ed., pp. 100–125). San Francisco: Jossey-Bass.
  • Poister, T. H., & Streib, G. (1999). Performance measurement in municipal government: Assessing the state of the practice. Public Administration Review, 59(4), 325–335.
  • Radin, B. (2006). Challenging the performance movement. Washington, DC: Georgetown University Press.
  • Rubenstein, R., Stiefel, L., & Schwartz, A. E. (2003). Better than raw: A guide to measuring organizational performance with adjusted performance measures. Public Administration Review, 63(5), 607–615.
  • Schochet, P. Z., & Burghardt, J. (2008). Do Job Corps performance measures track program impacts? Journal of Policy Analysis and Management, 27(3), 556–576.
  • Schochet, P. Z., Burghardt, J., & McConnell, S. (2006). National Job Corps study and longer-term follow-up study: Impact and benefit-cost findings using survey and summary earnings records data. Princeton, NJ: Mathematica Policy Research.
  • Schultz, T. W. (1961). Investment in human capital. American Economic Review, 51(1), 1–17.
  • Smith, J., & Todd, P. (2005). Does matching overcome LaLonde’s critique of nonexperimental estimators? Journal of Econometrics, 125, 305–353.
  • Social Policy Research Associates. (1999). Guide to JTPA performance standards for program years 1998 and 1999. Menlo Park, CA: Social Policy Research Associates.
  • Social Policy Research Associates. (2004). The Workforce Investment Act after five years: Results from the national evaluation of the implementation of WIA. Oakland, CA: Social Policy Research Associates.
  • Trutko, J., & Barnow, B. S. (2010). Implementing efficiency measures for employment and training programs: Final report. Occasional Paper ETAOP 2010-05. Washington, DC: U.S. Department of Labor, Employment and Training Administration.
  • U.S. Department of Labor, Employment and Training Administration. (2001). Summary report on WIA implementation. Washington, DC: U.S. Department of Labor, Employment and Training Administration.
  • U.S. Department of Labor, Employment and Training Administration. (2002). Training and Employment Guidance Letter No. 11-01. Guidance on revising Workforce Investment Act (WIA) state negotiated levels of performance. Washington, DC: U.S. Department of Labor, Employment and Training Administration.
  • U.S. Department of Labor, Employment and Training Administration. (2007). Training and Employment Guidance Letter No. 19-06. Negotiating performance goals for the Workforce Investment Act Title 1B programs and Wagner-Peyser Act program for program years 2007 and 2008. Washington, DC: U.S. Department of Labor, Employment and Training Administration.
  • U.S. Department of Labor, Office of Job Corps. (2007). Policy and requirements handbook: Job Corps. Washington, DC: U.S. Department of Labor, Office of Job Corps.
  • U.S. General Accounting Office. (1998). Glossary: Performance measurement and evaluation—Definitions and relationships. GAO/GGD-98-26. Washington, DC: U.S. General Accounting Office. Available at http://www.gao.gov/special.pubs/gg98026.pdf
  • U.S. General Accounting Office. (2002). Workforce Investment Act: Improvements needed in performance measures to provide a more accurate picture of WIA’s effectiveness. Washington, DC: U.S. General Accounting Office.
  • U.S. Government Accountability Office. (2004a). Federal budget: Agency obligations by budget function and object classification for fiscal year 2003. GAO-04-834. Washington, DC: U.S. Government Accountability Office.
  • U.S. Government Accountability Office. (2004b). Principles of federal appropriations law (3rd ed., vol. 1). GAO-04-261SP. Washington, DC: U.S. Government Accountability Office.
  • Wholey, J. S., & Hatry, H. P. (1992). The case for performance monitoring. Public Administration Review, 52(6), 604–610.
  • Wilde, E. T., & Hollister, R. (2007). How close is close enough? Testing nonexperimental estimates of impact against experimental estimates of impact with education test scores as outcomes. Journal of Policy Analysis and Management, 26(3), 455–477.
  • Workforce Enterprise Services. (2007). Chicago Workforce Board cost per participant study final report: Cost per participant comparison study—Phase two final report.
