Abstract
When the social impact bond (SIB) model was first introduced in 2010, there were many claims about how these projects could transform public service delivery. This article investigates how SIB projects have been designed in relation to two original intentions: (1) shifting the focus of public service delivery to achieving impact and (2) transferring risk from the government to external investors. Qualitative content analysis of SIB projects launched up to 2020 in the US and UK (n = 114) is used to plot and analyze how the design of SIB projects varies across these intentions. We find that SIB design in practice has deviated from the model’s original intent in several important ways: In the UK, payable outcomes are generally not subject to rigorous validation methods. In both the US and the UK, upfront capital is generally independent and at-risk, but risk mitigation strategies may limit the intended transfer of risk to investors.
Acknowledgment
We would like to thank the provider, investor, intermediary, and government partners who provided access to information that allowed this work. We would also like to thank Clare Fitzgerald and Alec Fraser for organizing the special issue symposium at the 2021 Social Outcomes Conference. Thanks also to Michael Bryden for proofreading the article. Dr Carter acknowledges support through the UKRI Future Leaders Fellowship (grant number MR/T040890/1). Finally, we would like to thank the field experts and Government Outcomes Lab team members who served an advisory role to help strengthen the paper's methodology and conclusions.
Disclosure statement
No potential conflicts of interest are reported by the authors.
Notes
1 OECD stands for the Organisation for Economic Co-operation and Development, a group of 37 countries that are typically democratic and support free-market economies (Kenton 2020).
2 We have separated commercial and social investors to get a more accurate picture of the investor types across US and UK SIB projects. Distinguishing types of investors has relevance for questions around new market entrants, incorporation of private-sector logics, and the potential to scale the SIB model. Scores for ‘primary investor type’ reflect this distinction, with commercial investors scoring higher. However, in composite ‘Nature of the capital’ scores, projects with either social or commercial investors are scored identically. This is because both commercial and social investors represent ‘independent’ capital.
3 One individual was included as both an expert practitioner and in the advisory group.
4 The 12 projects included in the pilot phase spanned geography, time, and outcomes funds. These included: 2010 Peterborough, 2012 Innovation Fund Teens & Toddlers, 2013 Essex, 2013 New York City, 2014 Massachusetts Juvenile Justice, 2015 Fair Chance Fund Fusion Housing, 2015 Youth Engagement Fund Sheffield, 2016 Commissioning Better Outcomes (CBO) Fund West London Zone, 2016 South Carolina, 2017 Oklahoma, 2018 CBO Surrey SIB, and the 2018 Jefferson County.
5 The group of expert practitioners has worked on or been connected to a large share of the SIB projects included in this analysis. For the projects with which they were individually familiar, members of this group were able to verify our findings and provide project-specific details to help populate instances of missing data.
6 The authors worked through the coding frame step-by-step with an advisory group of SIB academics and practitioners (n = 11), incorporating their critiques and suggestions and securing validation of the coding frame construction. As a result of this process, we edited the payment model scores to range from 0-3 instead of 0-4, removing a score for exactly half of payments made for outcomes (which signalled a degree of precision in the data that we didn’t have). We discussed how practitioners might interpret thresholds between classifications, and whether the same projects might be classified in the same way by different parties to the deal. We also reached consensus that for the “payment linked to impact” composite score, the strength of the validation method should weigh comparatively more than the payment model. For the “nature of the capital” composite score, the authors and advisory group concurred that the strength of the repayment structure should weigh comparatively more than the primary investor type.
7 Even though only two projects scored either a 0 or 1, we retain the five-point scale (rather than condensing to a three-point scale) to (a) demonstrate that nearly all projects include capital that is at least moderately at-risk, and (b) enable identification of low-scoring projects with limited independent, at-risk capital.
Additional information
Notes on contributors
Christina Economy
Christina Economy is a DPhil graduate from the Government Outcomes Lab at the Blavatnik School of Government, University of Oxford. Her research seeks to understand how government agencies can provide social programs more effectively to better serve vulnerable populations.
Eleanor Carter
Eleanor Carter is a UKRI Future Leaders Research Fellow at the Blavatnik School of Government, University of Oxford and is also Research Director for the Government Outcomes Lab. Eleanor's research investigates challenges in coordinating complex public service delivery networks and cross-sector partnerships.
Mara Airoldi
Mara Airoldi is the Academic Director for the Government Outcomes Lab at the Blavatnik School of Government, University of Oxford. Mara is an Economist and Decision Analyst by background and holds degrees from Bocconi University in Milan and the London School of Economics and Political Science. Her research is motivated by a desire to improve decision making in government, with a special interest and extensive expertise in the field of healthcare.