PART THREE: Welfare Services

U.S. Welfare-to-Work Programs: The Good, the Bad and the Ugly

Pages 1354-1379 | Published online: 13 Aug 2008
 

Abstract

Welfare policy in the United States has moved from passive transfer payments to ‘activating’ welfare recipients toward greater self-sufficiency. Using meta-analysis, we assess around 100 mandatory U.S. welfare-to-work programs, each evaluated by random assignment, to identify those performing best and worst in terms of their effects on participants' earnings and the proportion of participants moving off welfare. Controlling for participant and site characteristics, the program features that differentiate “good” from “bad” programs are whether the interventions increased the use of sanctions, job search, and work experience, which increases program effect sizes, and whether they incorporated financial incentives, which reduces them. However, rather than abandoning financial incentives wholesale, we argue for a better balance of welfare reform measures that would help to reduce the risks of poverty and the threats to community cohesion, as well as ensure greater self-sufficiency.

Research on the project described in this document was funded, in part, by the Administration for Children and Families of the U.S. Department of Health and Human Services. All the views expressed are those of the authors and do not necessarily reflect the views of the funding agency.

Notes

5. Greenberg & Wiseman, op. cit.

8. Hamilton, et al., 2001.

13. The latter study included four evaluations of voluntary welfare-to-work programs in addition to the 27 evaluations of mandatory programs and also recorded and analyzed the impacts for programs targeted at two-parent families. It also updated impacts for the evaluations used by Ashworth et al. (Ashworth, K., Cebulla, A., Greenberg, D., & Walker, R. Meta-evaluation: Discovering what works best in welfare provision. Evaluation 2004, 10 (2), 193–216.), when estimates for additional time periods had become available since that study was undertaken.

15. Ibid., pp. 34–50.

16. Applying a hierarchical statistical model to data from the six sites that made up California's Greater Avenues for Independence demonstration (GAIN), Dehejia (Dehejia, R. H. Was there a Riverside Miracle? A hierarchical framework for evaluating programs with grouped data. Journal of Business & Economic Statistics 2003, 21 (1), 1–11) also highlighted the importance of site characteristics for program impacts.

17. Table A.1 in Appendix 1 shows the mid-year of random assignment, that is, the year between the first and the last year during which eligible individuals were randomly assigned to either program participant status or control group status for the purpose of the evaluation.

18. Voluntary welfare-to-work programs that are listed in Table A.1 are not included in the analysis presented in this article because they cannot be appropriately pooled with mandatory programs (see Friedlander, D., Greenberg, D. H., & Robins, P. K. Evaluating government training programs for the economically disadvantaged. Journal of Economic Literature 1997, 35, 1809–1855.). (For a review of the effectiveness of voluntary training programs in improving the earnings of participants, see Greenberg, D., Michalopoulos, C., & Robins, P. K. A meta-analysis of government-sponsored training programs. Industrial and Labor Relations Review 2003, 57 (1), 31–53). In addition, the Wisconsin Self-Sufficiency First/Pay for Performance Program, a mandatory program, is also excluded from the meta-analysis. This evaluation was subject to a number of technical problems and, consequently, only limited confidence can be placed in its estimates of program impacts.

19. TANF ended federal entitlement to assistance and created block grants to fund State expenditures on benefits, administration, and services to needy families. It also placed time limits on the receipt of welfare assistance and changed work requirements for benefit recipients. In addition, its passage meant that states no longer needed federal waivers to implement mandatory welfare-to-work programs.

20. The upper (lower) bound of the 95 percent confidence interval for each individual impact estimate and for the mean impact is computed by multiplying the impact's standard error by 1.96 and then adding (subtracting) the resulting value to (from) the impact estimate. The standard error of the mean impact is computed as the square root of the reciprocal of the sum of the inverses of the variances of the individual impact estimates (see Shadish, W., & Haddock, C. K. (1994). Combining estimates of effect size. In: Cooper, H., & Hedges, L. V., Eds., The Handbook of Research Synthesis. New York: Russell Sage Foundation).
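The computation described in this note can be sketched in a few lines of Python. The sketch assumes the mean impact is the inverse-variance weighted average, consistent with the fixed-effect model in Shadish & Haddock; the function names and any numerical inputs are illustrative, not figures from the article.

```python
import math

def impact_ci(impact, se, z=1.96):
    """95% confidence interval for a single impact estimate:
    impact plus/minus 1.96 times its standard error."""
    return impact - z * se, impact + z * se

def mean_impact_ci(impacts, std_errors, z=1.96):
    """Inverse-variance weighted mean impact and its 95% confidence
    interval. The standard error of the mean is the square root of
    the reciprocal of the summed inverse variances, as in note 20."""
    weights = [1.0 / se ** 2 for se in std_errors]   # inverse variances
    mean = sum(w * x for w, x in zip(weights, impacts)) / sum(weights)
    se_mean = math.sqrt(1.0 / sum(weights))
    return mean - z * se_mean, mean + z * se_mean
```

With two hypothetical impact estimates of 2.0 and 4.0, each with standard error 1.0, the weighted mean is 3.0 and the interval half-width is 1.96 × √0.5.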

21. More precisely, the regressions are used to estimate what the impact of each individual intervention would be if the intervention's values for the explanatory variables in the regressions were identical to the mean values for the sample of interventions.
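The regression adjustment described in this note can be illustrated with a small sketch: given slopes from the meta-regression, each intervention's estimated impact is shifted to the value it would take if its explanatory variables equalled the sample means. The function and all inputs below are hypothetical illustrations, not estimates from the article.

```python
def adjusted_impact(impact, x, x_means, coefs):
    """Shift an observed impact estimate to the value predicted if the
    intervention's explanatory variables (x) took the sample-mean
    values (x_means), using the regression slopes (coefs).
    All arguments are illustrative, not data from the article."""
    return impact + sum(b * (xm - xi)
                        for b, xm, xi in zip(coefs, x_means, x))

# e.g. one covariate: observed impact 5.0 at x = 1.0,
# sample mean 2.0, slope 0.5 -> adjusted impact 5.5
```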

22. See Greenberg, 2005, op. cit.

24. This variable excludes expenditures on financial incentives and any program effects on AFDC payments.
