To measure or not to measure? An empirical investigation of social impact measurement in UK social enterprises


ABSTRACT

Social enterprises (SEs) – organizations with a dual mission to generate economic and social value – have become important players in the delivery of public services in the UK and elsewhere. While public sector value-for-money imperatives encourage these hybrid organizations to provide estimates of their social and economic impact, relatively little is known about which of them actually do so. Using institutional perspectives and large-sample data produced by Social Enterprise UK, we empirically document the uptake of social impact measurement in this sector and the extent to which context, the nature of the impact and stakeholder involvement explain differences in participation rates.

This article is part of the following collections:
Hybrid futures for public governance and management

Social enterprises (SEs) pursue social missions using market mechanisms. As such, they face complex governance challenges. They produce both public and private goods and since the value of the former is typically much more difficult to measure, the tendency is to emphasize the latter. This is consistent with dominant traditional accounting paradigms that emphasize financial results and easily measured outcomes (Ebrahim, Battilana, and Mair Citation2014; Gibbon and Dey Citation2011; Millar and Hall Citation2013).

This tendency has intensified with the move towards New Public Management (NPM) promoted in Europe and elsewhere as resource-constrained welfare systems increasingly rely on SEs and nonprofit organizations to design and deliver public services (Powell, Gillett, and Doherty Citation2019; Doherty, Haugh, and Lyon Citation2014; Arvidson and Lyon Citation2014; Mason Citation2012).

NPM offers prescriptions by which public sector organizations should be designed, organized, managed and, ultimately, function. The overarching principle of NPM is to make public sector organizations – and, by default, those interacting with them – ‘performance, cost, efficiency and audit-oriented’ (Diefenbach Citation2009, 893). Consequently, NPM-inspired social impact measurement schemes, such as Social Return on Investment (SROI) and Social Accounting and Audit (SAA), have become an important part of governmental regulatory approval processes in the assignment of contracts, grants and other resources to social enterprises (Fazzi Citation2012; Cunningham, Baines, and Charlesworth Citation2014; O’Dwyer and Unerman Citation2007; Millar and Hall Citation2013).

For many proponents, social impact measurement/assessment is more than a stand-alone process limited to the measurement of social outputs. Full engagement with these approaches can help SEs run more effectively and keep operations aligned to missions (e.g. McLoughlin, Kaminski, Sodagar, Khan, Harris, Arnaudo and McBrearty Citation2009; Barman Citation2007). Moreover, SEs that embrace social impact measurement may be more likely to create the participatory and deliberative processes that facilitate community discussions about the proposed social impacts of the organization and its activities (Esteves, Franks, and Vanclay Citation2012; Fazzi Citation2012; Arvidson and Lyon Citation2014). Social impact measurement can also act as an organizational legitimacy control verifying that the SE has respected its self-imposed rules (statute, mission, programme of action) and the legal norms applicable to its institutional formula (Suchman Citation1995; Nicholls Citation2010b; Bagnoli and Megali Citation2011).

In practice, however, SEs pursue very different social missions, which makes comparisons of social impact complex and challenging (Doherty, Haugh, and Lyon Citation2014; Millar and Hall Citation2013). In other words, the concept of social impact lacks coherence and robustness and, since it is largely self-determined, it can be subject to manipulation (Arvidson and Lyon Citation2014; Barman Citation2007). In particular, critics argue that NPM approaches to social impact measurement promote a one-dimensional focus on funder perspectives, invariably prioritizing investing stakeholders over others (Defourny and Nyssens Citation2010; Esteves, Franks, and Vanclay Citation2012; Julnes and Holzer Citation2001; Millar and Hall Citation2013). This, they claim, encourages mission drift towards the objectives of outside resource providers and amplifies the risk of managerial capture and political hijacking (e.g. Diefenbach Citation2009; Ebrahim, Battilana, and Mair Citation2014; Powell and Osborne Citation2020). Also, when accountability schemes are perceived as being controlled by ‘outside’ stakeholders for purposes of comparisons with competitors and/or to oversee performance management, they can have detrimental effects on SE organizational culture and staff morale (Christensen and Ebrahim Citation2006; Hwang and Powell Citation2009; Ebrahim Citation2005; Gibbon and Dey Citation2011).

Accordingly, as the social enterprise model continues to spread, so does the realization that social accounting frameworks are not only inadequate for this hybrid organizational form but are also damaging its development and future sustainability (O’Dwyer and Unerman Citation2007; Liston-Heyes et al. Citation2017). These tensions suggest that there may be obstacles to the adoption of social impact measurement schemes beyond awareness and resource constraints.

Given the scope of the debate, it is surprising how little is known about social impact measurement by SEs in practice. An exception is Bertotti, Leahy, Sheridan, Tobi, and Renton (Citation2011), who provide descriptive statistics focused on the UK health and social care sectors using the State of Social Enterprise Survey 2009. Another is a study by Maas and Grieco (Citation2017), who use an international sample of 3,194 SEs from the Global Entrepreneurship Monitor data to investigate the relationship between the nature of SE mission and its decision to measure social impact. Our analysis complements and extends this research by investigating the factors that motivate (or hinder) social impact measurement in UK-based SEs that participated in the 2017 State of Social Enterprise Survey. As in earlier studies, the data we use are self-reported and subject to variation in what constitutes social impact and how it is measured.

Informed by the literature and guided by the concept of organizational legitimacy, we present a model that connects the measurement of social impact to SE attributes. More concretely, we posit that the SE’s context, the nature of its impact, and the engagement of nonfunding and funding stakeholders in SE decisions will influence propensities to measure social impact. This theoretical discussion appears in the first section of the paper. Section 2 explains the data and the ordered and simple logit regression approaches we use to test the hypotheses. Results are presented in Section 3 and discussed in Section 4. The paper ends with brief conclusions highlighting contributions to the academic and practitioner literatures and important caveats.

Social enterprises and social impact measurement

We use institutional theories to conceptualize the pressures that frame SEs’ decisions to measure their social impact and to develop testable hypotheses.Footnote1 Our model is informed by DiMaggio and Powell’s (Citation1983) isomorphic processes (coercive, mimetic and normative), which provide useful tools for identifying the forces that regularize the sector and homogenize its practices. We also invoke Suchman (Citation1995), who revisits these dynamics through the lens of organizational legitimacy. Organizational legitimacy incorporates both institutional legitimacy – which focuses on the pressures and dynamics that transcend any single organization’s purposive control – and strategic legitimacy – which emphasizes the managerial perspective and instrumental manipulations by organizations to garner societal support. As with other SE researchers, we argue that these perspectives are useful in describing the events that shaped the development of the sector and in explaining SEs’ stance towards social impact measurement (e.g. Arvidson and Lyon Citation2014; Barman Citation2007; Nicholls Citation2010b; Bagnoli and Megali Citation2011). Evidence suggests that, for UK SEs, organizational legitimacy has been and continues to be driven by public sector ideologies, norms and regulations (Teasdale, Alcock, and Smith Citation2012; Powell, Gillett, and Doherty Citation2019).

The perceived potential of SEs to operate as surrogate public organizations was highlighted by the severe austerity measures that followed the financial crisis, pressuring governments to do ‘more with less’. Framed by notions of ‘Big Society’ and ‘Third Way’, the SE sector experienced unprecedented growth as governments reduced their direct involvement in public service delivery while encouraging SEs to fill this gap through government grants and contracts (Powell and Osborne Citation2020; Dey and Teasdale Citation2016; Hall, Miller, and Millar Citation2016). Concerns emerged about whether the potential of SEs in public service delivery had been exaggerated and whether this hybrid form was financially viable in the long term (e.g. Mason Citation2012; Powell, Gillett, and Doherty Citation2019; VanSandt, Sud, and Marmé Citation2009; Sud, VanSandt, and Baugous Citation2009). Perhaps unsurprisingly, the expansion of the sector was accompanied by pressures to demonstrate its achievements through NPM-inspired performance measurement and value-for-money principles (Millar and Hall Citation2013; Barman Citation2007). For SEs, failure to do so could reduce the visibility of their contribution and undermine their access to resources (funds and in-kind). Moreover, for policy-makers, demonstrating such value could help legitimize to the wider public the transfer of social responsibilities and the accelerated growth of the sector (Mason Citation2012; Nicholls Citation2010b; Gibbon and Dey Citation2011).

Responses to institutional pressures to measure performance differ across the SE sector. Some welcome social measurement schemes as tools that help demonstrate and ‘frame’ SE effectiveness to external stakeholders, providing a competitive advantage in tendering for public sector contracts and in grant applications (Ryan and Lyne Citation2008; Peattie and Morley Citation2008; Lee and Huang Citation2018). Such schemes can also serve a pedagogic function by providing guidance and control for the organization and helping staff to analyse and improve their services (Arvidson and Lyon Citation2014; Diefenbach Citation2009; VanSandt, Sud, and Marmé Citation2009). On the other hand, for some SEs, measuring social impact can be prohibitively costly and/or divert too many resources away from key activities. This is particularly so when most of the SE’s activities are ‘soft’ and require subjective value judgements (Millar and Hall Citation2013; Kendall and Knapp Citation2000). These valuations can also become contentious when they are used for competitive comparisons in the allocation of funding (Ryan and Lyne Citation2008; Christensen and Ebrahim Citation2006; Hwang and Powell Citation2009; Ebrahim Citation2005). Social impact measures are also associated with managerial capture, mission drift and, in some cases, tensions linked to ideological and cultural differences of opinion with respect to the necessity and organizational consequences of measuring social value (Millar and Hall Citation2013; Diefenbach Citation2009; Cunningham, Baines, and Charlesworth Citation2014). Developing and using performance measures of any kind often involves transformations that may be threatening to an organization – whether these threats are real or not – and lead to long-term decreases in actual performance (Julnes and Holzer Citation2001; Arvidson and Lyon Citation2014; Diefenbach Citation2009).

In other words, there are substantial pressures for SEs to mimic the management systems of public sector agencies but there are also grounds for resistance. This may explain, at least in part, the relatively slow uptake of social measurement tools in the SE sector (Peattie and Morley Citation2008; Bertotti et al. Citation2011; Millar and Hall Citation2013). Against this background, we investigate some of the institutional factors that may be encouraging/discouraging social impact measurement in SEs.

While our analysis focuses on the dichotomous ‘to measure or not to measure’ decision, we recognize that social impact is socially constructed and there are tensions over how it is defined and measured (Barman Citation2007; Hwang and Powell Citation2009). These tensions can incentivize impression management, decoupling, symbolic compliance and deflection, amongst other strategic and/or coping behaviours engendered by compliance with the performance measurement process (Julnes and Holzer Citation2001; Dey and Teasdale Citation2016; Arvidson and Lyon Citation2014). These are important and connected issues but we lack the data for a more nuanced investigation.Footnote2 Nonetheless, we argue that a systematic inquiry into the patterns of social impact measurement adoption can provide preliminary insights into SE pressures for conformance to sector norms (Ebrahim and Rangan Citation2014). Implicitly, we recognize that the context in which the more general decision to measure social impact is made is at least as important as the actual measures that are used (Julnes and Holzer Citation2001).

With this in mind, we propose four sets of factors reflecting different isomorphic pressures that influence the SE’s decision to measure social impact (Nicholls Citation2010a; Mason Citation2012; Dey and Teasdale Citation2016; Arvidson and Lyon Citation2014).

H1 – SE Context

Measuring social impact is a knowledge-intensive endeavour (Julnes and Holzer Citation2001; Kendall and Knapp Citation2000; Liston-Heyes et al. Citation2017). Accordingly, we posit that SEs operating in environments where this knowledge is more easily accessed will find it easier to measure their social impact (Millar and Hall Citation2013).

One implication is that SEs with the knowledge to measure impact are more likely to be located in areas with greater social capital. More generally, geographical proximity or copresence to wealthier urban centres facilitates philanthropic support of all kinds, including volunteering, donations (monetary and in-kind), participation in advisory boards and access to corporate resources (Liston-Heyes et al. Citation2017; Mason Citation2012; McCulloch, Mohan, and Smith Citation2012; Mohan Citation2012). SEs that are embedded in social networks will have easier access to the ‘new breed’ of paid managers that have been trained in the art of NPM and value-for-money principles (Diefenbach Citation2009; DiMaggio and Powell Citation1983; Ebrahim and Rangan Citation2014; Millar and Hall Citation2013). These employees tend to have more homogeneous preferences aligned with greater formalization and NPM ideas (Fazzi Citation2012). In other words, isomorphic pressures on SEs will vary considerably according to geography (Clifford Citation2012; Mohan Citation2012). In the UK, one expects higher levels of, and/or easier access to, social capital, greater levels of professionalization and the ratification of common administrative norms in more affluent areas that are only available to those with the resources to locate there (McCulloch, Mohan, and Smith Citation2012; Ebrahim and Rangan Citation2014; Esteves, Franks, and Vanclay Citation2012). For these reasons, we posit that SEs located in London will have easier access to resources that facilitate social impact measurement than those based elsewhere (H1a).

Other than location, we postulate that social franchises may find it easier and/or be under greater pressure to measure their social impact. Social franchising involves an SE (the franchisor) licensing its business operating systems, products/services, or branding to other SEs (the franchisees) in exchange for agreed fees or sales-based payments (Lyon and Fernandez Citation2012; Tracey and Jarvis Citation2007). Adoption of the social franchising format can thus provide SEs with additional conformance pressures and an organizational format that has already been tested for financial viability and social impact (Bloom and Chatterji Citation2009; Dees, Anderson, and Wei-Skillern Citation2004; DiMaggio and Powell Citation1983; VanSandt, Sud, and Marmé Citation2009). Ebrahim and Rangan (Citation2014) argue that organizational growth, coalition building and replication can facilitate legitimization and the survival of the organization. Accordingly, we posit that SEs that have replicated or franchised their operations will have easier access to the knowledge and systems that facilitate social impact measurement (H1b).

We also hypothesize that SEs that self-describe as ‘COOPs’ will be relatively more resistant to isomorphic pressures than their counterparts. SEs that perceive themselves as such will be more anchored in the cooperative traditions of collective social action, and this will affect their accountability responsibilities and preferences (Doherty, Haugh, and Lyon Citation2014; Defourny and Nyssens Citation2010; Ebrahim, Battilana, and Mair Citation2014; Powell, Gillett, and Doherty Citation2019). These organizations run themselves along collectivist-democratic lines and pride themselves on being self-managed without outside interference (Rothschild Citation2009). COOPs promote community-based structures, which increase the involvement of members without professional management skills in the governance of the SE. This can dilute direct and indirect pressures from external stakeholders to conform to sector norms (Cornforth Citation2014). Gibbon and Dey (Citation2011) suggest that these organizations are particularly resistant to proposals that privilege financial over social imperatives. Additionally, in many countries, SEs predominantly assume the form of the cooperative enterprise (Fazzi Citation2012). For these reasons, we posit that SEs that identify themselves as COOPs are less likely to engage with social impact measurement schemes. The following hypotheses reflect these conjectures:

H1a: SEs that are based in London are more likely to measure social impact.

H1b: SEs that are franchised are more likely to measure social impact.

H1c: SEs that identify themselves as ‘COOPs’ are less likely to measure social impact.

H2 – Nature of SE impact

By definition, SEs will be involved in trading activities or ‘earned income’ of one kind or another in the pursuit of their social objectives.Footnote3 They will, however, differ in how they perceive the impact they are having on society. More concretely, some SEs consider the direct provision of social, community and environmental services as their main social contribution (Lee and Huang Citation2018). Others believe that their social impact is realized through the employment of disadvantaged people. There are also SEs whose focus is not on the direct provision of social goods, but rather on generating revenues for parent or partner organizations involved in solving specific social problems. Impact measurement in such cases can be potentially easier to produce (Bagnoli and Megali Citation2011). In other words, the challenges of measurement will differ within the SE sector according to the nature of the social impact created by the organization (Doherty, Haugh, and Lyon Citation2014; Millar and Hall Citation2013; Bagnoli and Megali Citation2011). Ebrahim and Rangan (Citation2014) also highlight the difficulties of measuring social outcomes in SEs where the full value of social contributions is moderated by events beyond organizational boundaries. They also suggest that SEs with a narrower scope of activities and shorter timelines will find it easier, less costly and less controversial to develop and present performance indicators. Boyne (Citation2002) and Hall, Miller, and Millar (Citation2016) argue that organizations exhibit different levels of ‘publicness’ and that SEs closer to the private sector model (i.e. maximizing profits for shareholders and owners) will tend to have less ambiguous and more measurable goals.

Given differences in the ‘measurability’ of SEs’ impact, we posit that measurement decisions will also depend on the nature of the social impact SEs believe they are producing (Maas and Grieco Citation2017; Kendall and Knapp Citation2000; Millar and Hall Citation2013). This is captured in the following:

H2a: SEs that create impact mainly through the delivery of public services are more likely to measure social impact.

H2b: SEs that create impact mainly through who they employ are less likely to measure social impact.

H2c: SEs that create impact mainly through the profits they create or gift to other social good producers are more likely to measure social impact.

H3 – Non-Funding Stakeholder Involvement in SE Decisions

Our third set of hypotheses suggests that SEs’ willingness to measure social impact will depend on the extent to which their stakeholders are involved in the process. More concretely, we recognize that SEs are pressured by funders to measure impact (see H4) but they may also do so to achieve organizational legitimacy vis-à-vis stakeholders and to manage their demands on the organization (Esteves, Franks, and Vanclay Citation2012; Arvidson and Lyon Citation2014; Diefenbach Citation2009; Kendall and Knapp Citation2000). According to Fazzi (Citation2012), a multistakeholder decision-making process is the best guarantee of transparency and control. Stakeholder-inclusive governance models facilitate and enhance institutional and strategic legitimacy (Bagnoli and Megali Citation2011; Nicholls Citation2010a; VanSandt, Sud, and Marmé Citation2009; Esteves, Franks, and Vanclay Citation2012), promote value co-creation and generate a competitive advantage with target communities (Powell, Gillett, and Doherty Citation2019; Osborne, Radnor, and Strokosch Citation2016). Lee and Huang (Citation2018) also explore how careful framing through social impact measures can mobilize support from different stakeholders and facilitate the realization of organizational goals. On the other hand, failure to engage staff and other stakeholders in the process can lead to emasculation and feelings that the scheme is ‘controlling’ rather than ‘supportive’ (Gibbon and Dey Citation2011; Boyne Citation2002; Fazzi Citation2012) and increase stakeholder-related risks to the organization (Esteves, Franks, and Vanclay Citation2012). In other words, the extent to which internal and external stakeholder groups such as staff, consumers/beneficiaries and the wider community are involved in the SE’s efforts at self-evaluation can facilitate and motivate the adoption of measurement schemes (Bagnoli and Megali Citation2011; Julnes and Holzer Citation2001; Arvidson and Lyon Citation2014).
Accordingly, we propose the following:

H3a: SEs that involve their community in their decision-making processes are more likely to measure social impact.

H3b: SEs that involve their staff in their decision-making processes are more likely to measure social impact.

H3c: SEs that involve their beneficiaries in their decision-making processes are more likely to measure social impact.

H4 – Funding Stakeholders

As discussed above, the hybrid SE form grew in popularity when the UK endorsed ‘Third Way’ and ‘Big Society’ ideals that emphasized a heightened role for volunteering and social entrepreneurship in the delivery of public services (Doherty, Haugh, and Lyon Citation2014; Millar and Hall Citation2013; Powell, Gillett, and Doherty Citation2019; Hall, Miller, and Millar Citation2016). Its growth accelerated further when the Conservative government implemented massive public expenditure reductions in an effort to contain the economic ramifications of the 2007 financial crisis (Cunningham, Baines, and Charlesworth Citation2014; Mohan Citation2012). While the government delegated some of its responsibilities for the production of social goods through the outsourcing of contracts and the allocation of grants, public sector managers retained considerable power over these organizations through NPM-inspired reporting and auditing requirements (Gibbon and Dey Citation2011; Powell Citation2007; Kendall and Knapp Citation2000; Fazzi Citation2012). Social impact indicators can also provide useful legitimation tools to policy-makers responding to critics of public sector outsourcing and to the high expectations of the electorate (Mason Citation2012; Nicholls Citation2010b; Gibbon and Dey Citation2011).

Arvidson and Lyon (Citation2014) report SEs observing marked changes in the frequency with which public sector grant bodies referred to social impact measures and evaluation tools. These insights reflect the scale and scope of the isomorphic pressures on SEs to conform to performance measurement norms exerted by public sector managers operating under value-for-money and transparency mantras. In so far as NPM emulates techniques from the private sector and SEs also receive funding from private sector organizations (Ebrahim and Rangan Citation2014; Boyne Citation2002; Liston-Heyes et al. Citation2017; McCulloch, Mohan, and Smith Citation2012; Sud, VanSandt, and Baugous Citation2009), we hypothesize that reliance on government funding and trade with the private sector (as opposed to reliance on trade with the general public, other third sector organizations, membership fees, donations and other forms of revenue collection) will incentivize the adoption of social measurement schemes:

H4a: SEs whose main source of income is from the public sector are more likely to measure social impact.

H4b: SEs whose main source of income is from the private sector are more likely to measure social impact.

Controls

Many factors other than those hypothesized above are likely to influence the SE’s decision to measure social impact. In particular, an SE’s financial resources can facilitate the measurement process, and size can ensure that the SE reaps any economies of scale and scope associated with the implementation of a measurement system (Julnes and Holzer Citation2001; Kendall and Knapp Citation2000). While we do not have access to reliable SE financial data, we used two indicators of ‘size’ to capture these effects (i.e. paid employees and scale of operation).

Another key control routinely used in entrepreneurship studies is the age of the organization. Since younger SEs have limited resources, they are more likely to seek start-up funding from foundations and granting agencies with stricter social reporting norms (Liston-Heyes et al. Citation2017). Relatedly, the strategic benefits of ‘venture framing’ through social impact measurement are much more consequential in the early stages of an organization, as funders and other stakeholders have little else on which to base their assessments (Lee and Huang Citation2018). To alleviate the risks inherent in the management of these new business forms, younger entrepreneurs are more likely to mimic the dominant models promoted by their peers (VanSandt, Sud, and Marmé Citation2009; DiMaggio and Powell Citation1983). Accordingly, we control for SE age, SE size and SE operating scale throughout our analyses.

Sources and Methods

We obtained data from the State of Social Enterprise Survey 2017 commissioned by Social Enterprise UK. Survey responses were gathered via telephone interviews and online surveys primarily with the individual in day-to-day control of the business and/or the individual specifically responsible for the financial affairs of the SE. A two-step filter was used to ensure that the sample reflected the landscape of social enterprises, i.e. organizations were only considered to be in the scope of the survey if they defined themselves as a social enterprise and generated 25% or more of their income from trading activities.

This produced a sample of 1,060 useable responses, the largest dataset on the state of social enterprises in the UK available at the time. While the field research team (BMG UK) made efforts to produce a representative sample of UK SEs, nonprobability sampling strategies were used, which limits generalizability and statistical inference to the larger population of UK SEs. Recent UK government statistics suggest that this population comprises 471,000 UK SEs, most of them (n = 371,000) operating with no employees (Dept. BE&IS, UK Gov. Citation2017).

Nonetheless, descriptive statistics of key SE indicators (including the percentage of organizations identified as registered charities, cooperatives, community businesses, social firms, etc.) are comparable to responses of surveys conducted in 2011, 2013 and 2015, adding some external validity to our findings.Footnote4 Details of the survey methodology are available at Social Enterprise UK (Social Enterprise UK Citation2017, 12). The questions used to derive the study variables appear in the supplementary material.

In line with our top-level research question – that is, what factors predict SE social impact measurement – we looked for variables that would allow us to test the four hypotheses using logit regression analyses. Identifying the dependent variable was straightforward since the questionnaire directly asks SE respondents to rate their level of agreement on a scale of 1 (not at all) to 4 (to a large extent) with question Q56: ‘To what extent does your organization measure its social impact?’ We note that while the survey does not explicitly define social impact, respondents are encouraged to differentiate between economic, environmental and social spheres of activity by the way the questions are structured and formulated. The social impact question (Q56) is located towards the end of the survey, after the finance section. The survey does not refer to any particular social impact measurement systems or metrics (see Table 1 for more details).

Table 1. Dependent variable (MEASOCIMP)

Identifying the determinants of social impact measurement was more challenging since the questionnaire was not designed for this purpose. The first set of hypotheses (H1) investigates the extent to which SE context is linked to social impact measurement. Ideally, we would need information about proximity to metropolitan city centres, but the survey only provides regional-level details of the geographical location of the SE, although it includes ‘London’ as one of these regions (see supplementary material). Accordingly, we constructed a dummy variable identifying whether the SE is located in or outside London (GEO-LONDON). We also constructed dummies that identify whether the SE belongs to a franchise or an organization that replicates its management systems (FRANCHISE) and whether the SE recognizes itself as a cooperative (COOP). We also tested substitutes for these three dummies, including an urban-rural dummy (URBAN), dummies capturing alternative organizational forms such as whether the SE considered itself to be a social firm (SOCFIRM), a registered charity (RCHAR), or a community business (COMBUS), and dummies capturing its legal form, i.e. company limited by guarantee (CLBYG), company limited by shares (CLBYS) and community interest company limited by guarantee (CICCLG). We retained GEO-LONDON, FRANCHISE and COOP for the regression analyses but used these substitutes in the robustness tests (see supplementary material).
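The construction of these context dummies is mechanical. A minimal sketch in Python/pandas, assuming hypothetical column names and response codes (the survey’s actual variable names are not reproduced here), might look like:

```python
import pandas as pd

# Toy extract standing in for the survey responses; the column names
# ("region", "franchise_status", "self_description") and their codes
# are hypothetical, chosen for illustration only.
df = pd.DataFrame({
    "region": ["London", "North West", "Scotland", "London"],
    "franchise_status": ["franchisor", "none", "franchisee", "none"],
    "self_description": ["cooperative", "charity", "cooperative", "community business"],
})

# Dummies as described in the text: 1 if the condition holds, 0 otherwise.
df["GEO_LONDON"] = (df["region"] == "London").astype(int)
df["FRANCHISE"] = df["franchise_status"].isin(["franchisor", "franchisee"]).astype(int)
df["COOP"] = (df["self_description"] == "cooperative").astype(int)

print(df[["GEO_LONDON", "FRANCHISE", "COOP"]])
```

The same pattern extends to the substitute dummies (URBAN, SOCFIRM, RCHAR, COMBUS and the legal-form indicators) used in the robustness tests.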

The second hypotheses (H2) investigate whether the way the SE envisages the nature of its social impact affects its decision to measure it. While all the SEs in our dataset participate in trading activities of various kinds, they differ in how they use these returns to achieve their social objectives. Accordingly, we constructed three dummies that capture whether the SE believes that its social impact lies primarily in the delivery of public services (SERVICES), the employment of disadvantaged individuals (EMPLOYMENT) or the creation of profits that are gifted to a separate cause or parent charity (PROFITS FOR SOCIAL GOODS).

The third and fourth hypotheses examine potential links between stakeholder pressures and impact measurement. To capture the influence of nonfunding stakeholders (H3), we constructed three dummy variables from a questionnaire item that asked SEs to rate the extent to which the community, staff and beneficiaries were actively involved in SE decision-making (COMMUNITY, STAFF, BENEFICIARIES). Funding stakeholder pressures (H4) are derived from survey questions asking SEs to select their main source of income from a list of 13 choices. From these responses, we constructed two dummy variables entitled ‘TRADE/GRANTS PUBLIC SECTOR’ and ‘TRADE PRIVATE SECTOR’, although we also tested the impact of less frequently stated response categories, including trade with the general public, third sector organizations, donations, and membership or subscription fees.

For the controls, and in the absence of reliable financial data, we used proxies that capture access to (or lack of) financial resources: age, size and scale of operation. More concretely, the survey questions that offered banded response choices yielded lower nonresponse rates, and we used these to generate dummies that capture whether the SE had been in operation for 10 years or less (AGE – SE≤10 yrs) and, similarly, whether it had 10 or fewer employees (SIZE – SE≤10 emp). These cut-offs were determined by locating the median SE in the responses. We also used five-years-or-less and five-employees-or-less versions of these dummies in the robustness checks (AGE – SE≤5 yrs; SIZE – SE≤5 emp). The scale of operation was also presented as 11 banded choices (ranging from ‘your neighbourhood’ to ‘internationally’) depending on the geographic reach of the SE’s operations. The most relevant indicator for our purposes was whether the SE’s operations were conducted on an international scale (SCALE – INTERNATIONAL). More generally, using dichotomous variables throughout the analyses facilitates comparisons between odds ratios and the interpretation of results.

Descriptive statistics of the dependent and independent variables appear in Table 2, while the supplementary material provides point-biserial correlations (used because the variables are dichotomous).
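For reference, the point-biserial correlation is numerically identical to the Pearson correlation computed with the dichotomous variable coded 0/1. A minimal sketch with illustrative data (not the survey data):

```python
import numpy as np

def point_biserial(binary, other):
    """Point-biserial r: Pearson correlation with a 0/1-coded variable."""
    return float(np.corrcoef(binary, other)[0, 1])

# Toy example: a dummy variable against a 4-point scale.
r = point_biserial([0, 0, 1, 1], [1, 2, 3, 4])
print(round(r, 4))  # 0.8944
```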

Table 2. Independent variables

We used logit regression analyses to test our hypotheses after ruling out ordered logit regression. While ordered logit regression fully exploits the ordinal responses (see column (a) in Table 3), further testing using the Likelihood Ratio Test and the Brant Test demonstrated that the data did not support the parallel regression assumption required for this procedure (Long and Freese Citation2006).Footnote5 The OLS approach was also considered inferior since the dependent variable only contains four ordinal response categories. Accordingly, we converted the responses to Q56 into a dichotomous variable compatible with simple logit regression analysis by dividing responses symmetrically along the Likert scale – i.e. scores 1 and 2 are assigned a ‘0’ and scores 3 and 4 a ‘1’ (see column (b)). Since there may be some ambiguity between scores 3 (‘to some extent’) and 2 (‘not very much’), we also tested versions of the dependent variable that leave out responses in category 3 (column (c)) or category 2 (column (d)). The robustness tests were conducted using version (b) of the dependent variable as it allows the retention of all the data.
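The recoding of the dependent variable described above can be sketched as follows (an illustrative reconstruction, not the authors' code; the sample responses are made up):

```python
# Constructing the three versions of the dichotomous dependent variable
# from 4-point Likert responses to Q56:
#   version (b): scores 1-2 -> 0, scores 3-4 -> 1 (all responses retained)
#   version (c): as (b) but category 3 ('to some extent') dropped
#   version (d): as (b) but category 2 ('not very much') dropped

def recode(responses, drop=None):
    """Map Likert scores {1,2,3,4} to {0,1}; optionally drop one category."""
    kept = [r for r in responses if r != drop]
    return [0 if r <= 2 else 1 for r in kept]

likert = [1, 2, 3, 4, 3, 2, 4, 1]      # hypothetical Q56 responses
version_b = recode(likert)             # retains all eight responses
version_c = recode(likert, drop=3)     # six responses remain
version_d = recode(likert, drop=2)     # six responses remain
print(version_b)  # [0, 0, 1, 1, 1, 0, 1, 0]
```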

Explanatory variables were entered according to the four hypotheses in our framework (Figure 1), but alternative sequences were also tested. Ordered logit and OLS results are included for comparison. Since the variables used in the analyses are derived from the same survey instrument, we tested for Common Method Bias using Harman’s Single Factor Score. We also assessed the extent of multicollinearity between variables.

Figure 1. To measure or not to measure?

A final parsimonious model was derived and rerun on 36 subsamples of the data to assess the consistency of the results. More concretely, the subsample analyses in the supplementary material compare odds ratios between SEs according to years of operation (Models 11–14), number of employees (Models 15–18), whether the SE is located in an urban or rural area (URBAN – Models 19 and 20) and investments in employee training (TRAIN – Models 21 and 22). Models 23 to 34 compare odds ratios across SEs with different organizational and legal forms as discussed above. Models 35–46 compare odds ratios across SEs with different organizational capabilities, including people management (PEOPMGT), developing and implementing a business plan and strategy (BUSP&S), developing and introducing new products or services (NEWPROD), making effective use of available technology (TECH), financial management (FINMGT), marketing, branding and PR (MARKT) and reacting to regulation and tax issues (REGTX).

Results

Table 3 presents the regression results conducted on the entire dataset. Model 1 includes only the control variables. Of the three, the odds ratio on the SCALE-INTERNATIONAL variable is above one and statistically significant throughout all the analyses, suggesting a positive relationship with SE propensity to measure social impact. The odds ratio on AGE is also above one, but its significance varies considerably across the subsamples, while the odds ratio on the SIZE variable is only marginally significant on two occasions (Models 20 and 41).
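As a reading aid for the tables that follow: a logit coefficient maps to an odds ratio via exponentiation, so an odds ratio above one signals a positive association with the propensity to measure social impact and one below it a negative association. A minimal sketch (the coefficient values are made up for illustration):

```python
import math

def odds_ratio(beta):
    """Convert a logit coefficient into an odds ratio."""
    return math.exp(beta)

print(round(odds_ratio(0.0), 3))     # 1.0 -> no association
print(round(odds_ratio(0.693), 3))   # ~2.0 -> roughly doubles the odds
print(round(odds_ratio(-0.693), 3))  # ~0.5 -> roughly halves the odds
```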

Table 3. Logit, ordered logit and OLS regression analyses

The three variables capturing the SE context (H1) are added in Model 2. All the odds ratios are significant at p = 0.05 or less. The odds ratios for GEO-LONDON and FRANCHISE are above one, whereas the odds ratio on COOP is below one, suggesting an inverse relationship with social impact measurement. Our results align with H1, but adding the additional variables and testing them across subsamples help determine their strength. None of the odds ratios of the variables relating to the nature of SE impact (H2) is significant (Model 3). This is a surprising result since the ease with which social impact can be measured across these three sets of activities will vary considerably. Adding these variables does not change the strength or significance of the odds ratios relating to the H1 variables.

H3 and H4 posit that SEs may be more willing to measure their social impact if pressured by their stakeholders (Models 4 and 5). We find that SEs who involve the community and staff members in their decision-making are more likely to measure social impact – i.e. the odds ratios are greater than one and significant – but the odds ratios on the involvement of beneficiaries are not significant. As for funding stakeholders (H4), only the odds ratio on the TRADE/GRANTS PUBLIC SECTOR indicator is statistically significant. It is greater than one, confirming the hypothesis that SEs whose main income is from the government are more likely to measure social impact. We tested the order in which the variables were entered and found no significant differences in the results.

Model 6 presents a parsimonious version of the model in which nonsignificant variables are removed. This reduced specification is used for further testing. Models 7 and 8 examine the sensitivity of the results to different versions of the dependent variable and the treatment of the ‘middle’ response categories (see Table 3). The results remain qualitatively unchanged, providing support for our use of a dependent variable that identifies SEs with scores 3 and 4 as ‘measuring social impact’ and those with scores 1 and 2 as ‘not measuring social impact’. Models 9 and 10 display the results from the ordered logit and OLS regressions as benchmarks. Multicollinearity was ruled out since the highest VIF we encountered was 1.050. Using the Harman’s Single Factor Score approach, we found that no single factor accounted for more than 50% of the variance, suggesting that CMB is unlikely to affect our results (Podsakoff et al. Citation2003).
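The variance inflation factor reported above is the standard diagnostic obtained by regressing each explanatory variable on the others; a VIF near one (as here) indicates negligible collinearity. A self-contained numpy sketch of that calculation (illustrative, not the authors' code):

```python
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R2_j), where R2_j comes from regressing
    column j of X on all the other columns (plus an intercept)."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        Z = np.column_stack([np.ones(len(Z)), Z])   # add intercept
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

# Two orthogonal dummy-like columns -> both VIFs equal one.
X = np.array([[1., 1.], [2., -1.], [3., 1.], [4., -1.], [5., 1.]])
print([round(v, 3) for v in vif(X)])  # [1.0, 1.0]
```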

We subsequently tested the logit regression results across 36 different subsamples of the data to assess their robustness (see supplementary material, Tables A2–A4). We found surprising consistency throughout the subsamples, with no qualitative changes in the nature of the odds ratios (i.e. those greater than one remained so and vice versa), although significance is weaker or absent with smaller sample sizes (standard errors tend to be greater in smaller samples, reducing the statistical significance of the odds ratios). We note in particular that the odds ratios associated with FRANCHISE and COMMUNITY are consistently above one while those on the COOP variable are always below one and statistically significant throughout almost all of the 44 models. The strength of these results is reflected by the (++) and (–) symbols in the result summary (Figure 2). The odds ratios on the other variables (GEO-LONDON, STAFF and TRADE/GRANTS PUBLIC SECTOR) remain above one, indicating positive relationships with the SE propensity to measure social impact, but we observe more instances of nonsignificance, particularly with smaller sample sizes. These relationships are denoted (+) in Figure 2, while the variables that were not significant in the analyses are crossed out.

Figure 2. Results synthesis

Discussion

Our results point to the context in which a particular SE operates being an important predictor of the likelihood that it measures social impact. SEs located in London may be more prone to normative isomorphic pressures as they are embedded in philanthropic networks that are more familiar with, and converted to, NPM-style measurement schemes (McCulloch, Mohan, and Smith Citation2012; Mohan Citation2012; Millar and Hall Citation2013; Ebrahim and Rangan Citation2014). Similarly, franchised SEs will typically have greater access to expert advice and more pressures to adopt tried-and-tested management systems that will facilitate, amongst other things, the measurement of social impact (Dees, Anderson, and Wei-Skillern Citation2004; VanSandt, Sud, and Marmé Citation2009; Bloom and Chatterji Citation2009; Lyon and Fernandez Citation2012). The results also indicate that SEs who identify themselves as COOPs are less likely to measure social impact, possibly reflecting COOP traditions of self-management and resistance to outside interference (Rothschild Citation2009; Doherty, Haugh, and Lyon Citation2014; Defourny and Nyssens Citation2010; Ebrahim, Battilana, and Mair Citation2014). We, therefore, find support for our claim that SEs who have easier access to knowledge and professional resources are more likely to measure social impact (H1a and H1b), while the COOP form seems to dampen conformance with dominant sector accountability norms and pressures (H1c).

The analysis did not support the hypotheses linking social impact measurement to differences in the ‘measurability’ of SE activities (H2) (Doherty, Haugh, and Lyon Citation2014; Millar and Hall Citation2013; Bagnoli and Megali Citation2011; Maas and Grieco Citation2017). This is surprising. Our findings suggest either that the complexity of measurement is not a significant factor in SE decisions and/or that our typology is not sufficiently refined to capture differences in the ‘difficulty’ of measurement processes across settings. It may also be the case that our dependent variable is too rudimentary and that a richer measure that accounts for the intensity of the measurement effort and/or the use of specific metrics would produce different results. This highlights the need for further sector-wide data collection and analyses (Saebi, Foss, and Linder Citation2019).

The literature suggests that social impact measurement can enhance SE institutional legitimacy by facilitating communications and accountability reporting to stakeholders and creating the reference frames that mobilize their support (e.g. Bagnoli and Megali Citation2011; Nicholls Citation2010a, Citation2010b; Lee and Huang Citation2018). While this literature highlights differences in pressures exerted on SEs by funders versus other stakeholder groups (O’Dwyer and Unerman Citation2007; Arvidson and Lyon Citation2014; Barman Citation2007; Millar and Hall Citation2013), relatively little is known about how nonfunding stakeholders such as community members, staff and beneficiaries affect SE processes. Our findings that community and staff engagement in decision-making are linked to the utilization of social impact measures give credence to these claims (H3a, H3b), although it is not clear why beneficiaries have no effect on the dependent variable (H3c). One possibility is that social impact measures are not effective in communications with this particular stakeholder group and/or that beneficiaries express resistance or indifference to such developments (Arvidson and Lyon Citation2014). Our results thus partially support the hypothesis that SEs practising multistakeholder governance may be more willing to measure their social impact or, conversely, that measurement facilitates multistakeholder governance (H3). These findings underline our limited understanding of stakeholder salience in SEs but highlight an interesting avenue for future research.

The data and analyses also support our hypothesis that social impact measurement is associated with public sector trade and grants (H4a) but do not support its private-sector counterpart (H4b). These findings corroborate insights from the literature suggesting that seeking funding from public sector entities, either through grants, trade or contracts, will ‘coerce’ SEs into adopting NPM-style reporting and accountability mechanisms (Arvidson and Lyon Citation2014; Nicholls Citation2010b; Gibbon and Dey Citation2011; Cunningham, Baines, and Charlesworth Citation2014). It may also be the case that SEs who have the resources to measure social impact are more likely to seek such funds (Liston-Heyes et al. Citation2017; Julnes and Holzer Citation2001).

Regardless of causality, our results underline the scale and scope of public sector bonds to the UK SE sector.

Conclusion

Social impact measures have become important determinants of SE legitimacy and government funding in the UK and elsewhere. Yet not all SEs are opting to measure their social impact despite strong institutional pressures to do so. Some are isolated from these pressures while others believe that conformance is either too costly or damaging to the SE mission. In this study, we attempt to document the patterns of social impact measurement utilization in the UK SE sector.

Our study meets three objectives. First, it reviews the literature to identify potential sources of isomorphic pressures exerted on SEs by the public sector in the adoption of social impact measurement. The resulting narrative explains how and why the sector came under increasing pressure to conform to public sector norms of comparability, audit and value-for-money. It highlights in particular how grants and contract allocations, reporting processes, training schemes and consultants may be incentivizing SEs into measuring their social impact. Our review also suggests that the absence of an established governance framework catering to the specificity of the hybrid form may have contributed to the spread of NPM-style indicators.

Secondly, our study presents a model of social impact measurement adoption that exploits these stylized facts. The model suggests that some of the pressures exerted on SEs will be embedded in the context in which they operate (H1) and that measurement scheme adoption may vary according to the ‘measurability’ of the impact produced by SEs (H2). It also postulates that social impact measurement will be linked to SE engagement with nonfunding stakeholders (beneficiaries, staff and community) (H3) and funding stakeholders (public sector, private sector) (H4). The resulting framework provides a tool to help differentiate between SEs who measure social impact and those who do not.

Thirdly, our study tests the model on a recent dataset of UK SEs. Our logit regression results show that SEs that are based in London, are franchised and have the public sector as their main income source are more likely to measure their social impact, while SEs that identify themselves as COOPs are generally less likely to do so. We also find that SEs who involve community stakeholders – and to a weaker extent their staff – in their decision-making are more likely to measure social impact. Somewhat surprisingly, the nature of SE impact carried no explanatory power in the adoption decision.

Our results provide a number of interesting insights into SE impact measurement. Firstly, they indicate that SEs are more likely to measure their social impact if government agencies are their main source of funding. This supports the notion that the public sector has been instrumental in shaping the UK SE sector by championing the potential of SEs as a novel way of delivering public services, by transferring public service delivery responsibilities to the sector in the aftermath of the financial crisis and by endorsing NPM-inspired accountability discourse surrounding SE performance. Since SEs are often depicted as combining elements from the private and charitable sectors, our results emphasize the pressure the public sector exerts on these organizations, as hypothesized and as corroborated by existing case studies. Secondly, the link between contextual factors (e.g. access to networks and location) and measurement decisions emphasizes the strength of institutional pressures while highlighting the importance of sector associations, training, professionalization and labour markets. Thirdly, our results show that SEs that engage their community and staff stakeholders in their decision-making processes are more likely to measure social impact, but this is not the case with SE beneficiaries. This suggests potential differences in the relative benefits of social measures across stakeholder groups, of relevance to those advocating their use in multistakeholder governance models. Finally, our study underscores the dearth of sector-wide information available on SEs and the need for further investment in systematic and regular data collection exercises.

In learning from the results, it is important to bear in mind some caveats. The dataset, while the largest and most up-to-date for the UK sector, was constructed using nonprobability sampling strategies. Since this is not a random sample, care is needed in extrapolating the results to the wider population; they describe empirical patterns found in this subset of the population. Second, our regressions help identify the characteristics of SEs who measure social impact versus those who do not. While we postulate on the mechanisms underlying these relationships, our analyses do not establish causality. Thirdly, our dependent variable does not assess the SE’s interpretation of social impact, the ‘type’ of impact measurement scheme that is used or the extent to which SE managers utilize it. This implies that what constitutes social impact is likely to vary considerably across SEs. Moreover, some SEs may be measuring social impact only in a symbolic way, without fully exploiting its potential or endorsing its discourse. Our analyses do not capture these important nuances. Finally, our study identifies London as a major metropolitan area. We recognize that the UK has other metropolitan cities (e.g. Birmingham, Manchester and Leeds) that may also provide access to social capital, experienced professionals and the networks that can encourage and facilitate social impact measurement. Unfortunately, the dataset classifies SEs by region except for London-based organizations. While additional tests indicate that an urban-rural divide does not capture these differences in environment and that, even amongst urban-based SEs, there appears to be a London-specific effect, additional research is needed to explore this issue to its fullest extent.

Barman (Citation2007, 112) concludes her historical account by stating that we should ‘… view the extent of nonprofits’ need to quantify as the inverse of the size and scope of the central government’. Our study provides empirical support for this narrative and complements it by providing preliminary results on who is quantifying and possibly why.

Supplemental material

Supplementary Material


Supplemental data for this article can be accessed at https://doi.org/10.1080/14719037.2020.1865435.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the British Academy/Leverhulme Small Research Grants (Reference no. SG162991); Leverhulme Trust [SG162991].

Notes on contributors

Catherine Liston-Heyes

Catherine Liston-Heyes is an economist with degrees from the University of Ottawa and McGill University. From 1993 to 2011, she was a member of faculty at the School of Management at Royal Holloway College, University of London. In 2011, she joined the Graduate School of Public and International Affairs and served as its director until July 2016. Her research is invariably motivated by and anchored in real public policy questions and controversies. In addition to her academic work, she has advised the OECD on regulatory and transport matters and is currently a member of the Executive Advisory and Auditing Committee for the Department of Environment and Climate Change Canada. Her work appears in a number of international peer-reviewed journals including Public Administration, Business and Society, Public Management Review, Voluntas, the Journal of Public Economics and the Journal of Business Ethics, amongst others.

Gordon Liu

Gordon Liu is Professor of Marketing Strategy at the Open University Business School. He received his PhD degree from Royal Holloway, University of London. Before joining the Open University, he was an Associate Professor at the University of Bath and a Senior Lecturer at Bournemouth University. His research is situated at the intersection of marketing, strategy and entrepreneurship with particular focus on cause-related marketing, product innovation and new product development, strategic orientation and capabilities, and networks and strategic alliances. His work has appeared in a number of international peer-reviewed journals including Entrepreneurship Theory and Practice, Strategic Entrepreneurship Journal, Journal of Product Innovation Management, International Journal of Operations and Production Management, International Marketing Review, Journal of Business Research, Journal of Small Business Management, Business Strategy and the Environment, Journal of Business Ethics, Group & Organization Management, Nonprofit and Voluntary Sector Quarterly, European Journal of Marketing, International Journal of Human Resource Management, and others.

Notes

1. The analyses that follow are based on the frequently cited definition by the UK Department of Trade and Industry which refers to an SE as ‘(…) a business with primarily social objectives whose surpluses are principally reinvested for that purpose in the business or in the community, rather than being driven by the need to maximize profit for shareholders and owners’ (SE Market Trends 2017, 14). (For discussions of SE definitions, see Defourny and Nyssens (Citation2010), Powell, Gillett, and Doherty (Citation2019), and Doherty, Haugh, and Lyon (Citation2014)).

2. As noted by one reviewer, a more thorough investigation would also examine the extent to which public servants use SEs’ social impact measures to demonstrate the legitimacy of their actions and progress their own agenda.

3. Earned income is revenue generated from the sale of goods, services rendered, processes, expertise and intellectual property or work performed. This includes membership, user, program, admission, rental, and/or performance fees, conferences, symposia, event, and/or presentation services, sale of goods (new and/or used), tuition, training materials, food and catering services, newsletters, magazines, advertising sales, information products, consulting services etc. (Pue Citation2019).

4. Note that these are not panel datasets and do not allow for tracking of SEs through time. Moreover, questions differ substantially between surveys preventing pooling of responses.

5. The ordered logit regression takes full advantage of the ordinal dependent variable. However, as it produces only one set of coefficients, the procedure implicitly assumes that the relationship between each pair of outcome groups is the same (the ‘parallel regressions assumption’). In other words, the procedure assumes that the coefficient when moving from MEASOCIMP = 1 to MEASOCIMP = 2 on the Likert Scale is the same as the coefficient that describes the transition from MEASOCIMP = 2 to MEASOCIMP = 3 and so on. If this is not the case, different models are required to describe the relationship between each pair of outcome groups.
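In standard notation (our gloss, not the article’s), the parallel regressions assumption can be written as:

```latex
% Ordered logit (proportional odds) model for the 4-point Likert response:
% the thresholds \alpha_j differ across the cumulative splits,
% but a single coefficient vector \beta is shared by all of them.
\log \frac{\Pr(\mathrm{MEASOCIMP} \le j)}{\Pr(\mathrm{MEASOCIMP} > j)}
  = \alpha_j - \mathbf{x}^{\top}\boldsymbol{\beta}, \qquad j = 1, 2, 3.
```

The Brant test effectively fits a separate binary logit for each of the three splits and tests whether the estimated coefficient vectors are equal; rejection, as in our data, indicates the assumption fails.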

References

  • Arvidson, M., and F. Lyon. 2014. “Social Impact Measurement and Non-profit Organizations: Compliance, Resistance, and Promotion.” VOLUNTAS: International Journal of Voluntary and Nonprofit Organizations 25 (4): 869–886. doi:10.1007/s11266-013-9373-6.
  • Bagnoli, L., and C. Megali. 2011. “Measuring Performance in Social Enterprises.” Nonprofit and Voluntary Sector Quarterly 40 (1): 149–165. doi:10.1177/0899764009351111.
  • Barman, E. 2007. “What Is the Bottom Line for Nonprofit Organizations? A History of Measurement in the British Voluntary Sector.” Voluntas: International Journal of Voluntary and Nonprofit Organizations 18 (2): 101–115. doi:10.1007/s11266-007-9039-3.
  • Bertotti, M., K. Sheridan, P. Tobi, A. Renton, and G. Leahy. 2011. “Measuring the Impact of Social Enterprises.” British Journal of Healthcare Management 17 (4): 152–156. doi:10.12968/bjhc.2011.17.4.152.
  • Bloom, P. N., and A. K. Chatterji. 2009. “Scaling Social Entrepreneurial Impact.” California Management Review 51 (3): 114–133. doi:10.2307/41166496.
  • Boyne, G. A. 2002. “Public and Private Management: What’s the Difference?” Journal of Management Studies 39 (1): 97–122. doi:10.1111/1467-6486.00284.
  • Chapman, T., D. Forbes, and J. Brown. 2007. ““They Have God on Their Side”: The Impact of Public Sector Attitudes on the Development of Social Enterprise.” Social Enterprise Journal 3 (1): 78–89. doi:10.1108/17508610780000723.
  • Christensen, R. A., and A. Ebrahim. 2006. “How Does Accountability Affect Mission? The Case of a Nonprofit Serving Immigrants and Refugees.” Nonprofit Management and Leadership 17 (2): 195–209. doi:10.1002/nml.143.
  • Clifford, D. 2012. “Voluntary Sector Organizations Working at the Neighbourhood Level in England: Patterns by Local Area Deprivation.” Environment and Planning A 44 (5): 1148–1164. doi:10.1068/a44446.
  • Cornforth, C. 2014. “Understanding and Combating Mission Drift in Social Enterprises.” Social Enterprise Journal 10 (1): 3–20. doi:10.1108/SEJ-09-2013-0036.
  • Cunningham, I., D. Baines, and S. Charlesworth. 2014. “Government Funding, Employment Conditions, and Work Organization in Non-profit Community Services: A Comparative Study.” Public Administration 92 (3): 582–598.
  • Daly, S. 2011. “Philanthropy, the Big Society and Emerging Philanthropic Relationships in the UK.” Public Management Review 13 (8): 1077–1094. doi:10.1080/14719037.2011.619063.
  • Dart, R. 2004. “The Legitimacy of Social Enterprise.” Nonprofit Management and Leadership 14 (4): 411–424. doi:10.1002/nml.43.
  • Dees, J. G., B. B. Anderson, and J. Wei-Skillern. 2004. “Scaling Social Impact.” Stanford Social Innovation Review 1 (4): 24–32.
  • Defourny, J., and M. Nyssens. 2010. “Social Enterprise in Europe: At the Crossroads of Market, Public Policies and Third Sector.” Policy and Society 29 (3): 231–242. doi:10.1016/j.polsoc.2010.07.002.
  • Department for Business, Energy and Industrial Strategy. (2017) Social Enterprise: Market Trends 2017. UK Government. Accessed 17 April 2019. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/644266/MarketTrends2017report_final_sept2017.pdf
  • Dey, P., and S. Teasdale. 2016. “The Tactical Mimicry of Social Enterprise Strategies: Acting ‘As If’ in the Everyday Life of Third Sector Organizations.” Organization 23 (4): 485–504. doi:10.1177/1350508415570689.
  • Di Domenico, M., P. Tracey, and H. Haugh. 2009. “Social Economy Involvement in Public Service Delivery: Community Engagement and Accountability.” Regional Studies 43 (7): 981–992. doi:10.1080/00343400701874180.
  • Diefenbach, T. 2009. “New Public Management in Public Sector Organizations: The Dark Sides of Managerialistic ‘Enlightenment’.” Public Administration 87 (4): 892–909. doi:10.1111/j.1467-9299.2009.01766.x.
  • DiMaggio, P. J., and H. K. Anheier. 1990. “The Sociology of Nonprofit Organizations and Sectors.” Annual Review of Sociology 16 (1): 137–159. doi:10.1146/annurev.so.16.080190.001033.
  • DiMaggio, P. J., and W. W. Powell. 1983. “The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields.” American Sociological Review 48 (2): 147–160. doi:10.2307/2095101.
  • Doherty, B., H. Haugh, and F. Lyon. 2014. “Social Enterprises as Hybrid Organizations: A Review and Research Agenda.” International Journal of Management Reviews 16 (4): 417–436. doi:10.1111/ijmr.12028.
  • Ebrahim, A. 2005. “Accountability Myopia: Losing Sight of Organizational Learning.” Nonprofit and Voluntary Sector Quarterly 34 (1): 56–87. doi:10.1177/0899764004269430.
  • Ebrahim, A., J. Battilana, and J. Mair. 2014. “The Governance of Social Enterprises: Mission Drift and Accountability Challenges in Hybrid Organizations.” Research in Organizational Behavior 34: 81–100. doi:10.1016/j.riob.2014.09.001.
  • Ebrahim, A., and V. K. Rangan. 2014. “What Impact? A Framework for Measuring the Scale and Scope of Social Performance.” California Management Review 56 (3): 118–141. doi:10.1525/cmr.2014.56.3.118.
  • Eikenberry, A. M., and J. D. Kluver. 2004. “The Marketization of the Nonprofit Sector: Civil Society at Risk?” Public Administration Review 64 (2): 132–140. doi:10.1111/j.1540-6210.2004.00355.x.
  • Esteves, A. M., D. Franks, and F. Vanclay. 2012. “Social Impact Assessment: The State of the Art.” Impact Assessment and Project Appraisal 30 (1): 34–42. doi:10.1080/14615517.2012.660356.
  • Fazzi, L. 2012. “Social Enterprises, Models of Governance and the Production of Welfare Services.” Public Management Review 14 (3): 359–376. doi:10.1080/14719037.2011.637409.
  • Gibbon, J., and C. Dey. 2011. “Developments in Social Impact Measurement in the Third Sector: Scaling up or Dumbing Down?” Social and Environmental Accountability Journal 31 (1): 63–72. doi:10.1080/0969160X.2011.556399.
  • Hall, K., R. Miller, and R. Millar. 2016. “Public, Private or Neither? Analysing the Publicness of Health Care Social Enterprises.” Public Management Review 18 (4): 539–557. doi:10.1080/14719037.2015.1014398.
  • Hwang, H., and W. W. Powell. 2009. “The Rationalization of Charity: The Influences of Professionalism in the Nonprofit Sector.” Administrative Science Quarterly 54 (2): 268–298. doi:10.2189/asqu.2009.54.2.268.
  • Julnes, P. D. L., and M. Holzer. 2001. “Promoting the Utilization of Performance Measures in Public Organizations: An Empirical Study of Factors Affecting Adoption and Implementation.” Public Administration Review 61 (6): 693–708. doi:10.1111/0033-3352.00140.
  • Kendall, J., and M. Knapp. 2000. “Measuring the Performance of Voluntary Organizations.” Public Management Review 2 (1): 105–132. doi:10.1080/14719030000000006.
  • Lee, M., and L. Huang. 2018. “Gender Bias, Social Impact Framing, and Evaluation of Entrepreneurial Ventures.” Organization Science 29 (1): 1–16. doi:10.1287/orsc.2017.1172.
  • Lingane, A., and S. Olsen. 2004. “Guidelines for Social Return on Investment.” California Management Review 46 (3): 116–135. doi:10.2307/41166224.
  • Liston-Heyes, C., P. V. Hall, N. Jevtovic, and P. R. Elson. 2017. “Canadian Social Enterprises: Who Gets the Non-earned Income?” VOLUNTAS: International Journal of Voluntary and Nonprofit Organizations 28 (6): 2546–2568. doi:10.1007/s11266-016-9787-z.
  • Long, J. S., and J. Freese. 2006. Regression Models for Categorical and Limited Dependent Variables Using Stata. Second Edition. College Station, Texas: Stata Press.
  • Lyon, F., and H. Fernandez. 2012. “Strategies for Scaling up Social Enterprise: Lessons from Early Years Providers.” Social Enterprise Journal 8 (1): 63–77. doi:10.1108/17508611211226593.
  • Lyon, F., and L. Sepulveda. 2009. “Mapping Social Enterprises: Past Approaches, Challenges and Future Directions.” Social Enterprise Journal 5 (1): 83–94. doi:10.1108/17508610910956426.
  • Maas, K., and C. Grieco. 2017. “Distinguishing Game Changers from Boastful Charlatans: Which Social Enterprises Measure Their Impact?” Journal of Social Entrepreneurship 8 (1): 110–128. doi:10.1080/19420676.2017.1304435.
  • Mason, C. 2012. “Isomorphism, Social Enterprise and the Pressure to Maximise Social Benefit.” Journal of Social Entrepreneurship 3 (1): 74–95. doi:10.1080/19420676.2012.665382.
  • McCulloch, A., J. Mohan, and P. Smith. 2012. “Patterns of Social Capital, Voluntary Activity, and Area Deprivation in England.” Environment and Planning A 44 (5): 1130–1147. doi:10.1068/a44274.
  • McLoughlin, J., J. Kaminski, B. Sodagar, S. Khan, R. Harris, G. Arnaudo, and S. Mc Brearty. 2009. “A Strategic Approach to Social Impact Measurement of Social Enterprises: The SIMPLE Methodology.” Social Enterprise Journal 5 (2): 154–178. doi:10.1108/17508610910981734.
  • Millar, R., and K. Hall. 2013. “Social Return on Investment (SROI) and Performance Measurement: The Opportunities and Barriers for Social Enterprises in Health and Social Care.” Public Management Review 15 (6): 923–941. doi:10.1080/14719037.2012.698857.
  • Mohan, J. 2012. “Geographical Foundations of the Big Society.” Environment and Planning A 44 (5): 1121–1127. doi:10.1068/a44697.
  • Nicholls, A. 2010a. “The Legitimacy of Social Entrepreneurship: Reflexive Isomorphism in a Preparadigmatic Field.” Entrepreneurship Theory and Practice 34 (4): 611–633. doi:10.1111/j.1540-6520.2010.00397.x.
  • Nicholls, A. 2010b. “Institutionalizing Social Entrepreneurship in Regulatory Space: Reporting and Disclosure by Community Interest Companies.” Accounting, Organizations and Society 35 (4): 394–415. doi:10.1016/j.aos.2009.08.001.
  • O’Dwyer, B., and J. Unerman. 2007. “From Functional to Social Accountability: Transforming the Accountability Relationship between Funders and Non-governmental Development Organizations.” Accounting, Auditing & Accountability Journal 20 (3): 446–471. doi:10.1108/09513570710748580.
  • Osborne, S. P., Z. Radnor, and K. Strokosch. 2016. “Co-production and the Co-creation of Value in Public Services: A Suitable Case for Treatment?” Public Management Review 18 (5): 639–653. doi:10.1080/14719037.2015.1111927.
  • Peattie, K., and A. Morley. 2008. “Eight Paradoxes of the Social Enterprise Research Agenda.” Social Enterprise Journal 4 (2): 91–107. doi:10.1108/17508610810901995.
  • Podsakoff, P. M., S. B. MacKenzie, J.-Y. Lee, and N. P. Podsakoff. 2003. “Common Method Biases in Behavioral Research: A Critical Review of the Literature and Recommended Remedies.” Journal of Applied Psychology 88 (5): 879–903. doi:10.1037/0021-9010.88.5.879.
  • Powell, M. 2007. The Mixed Economy of Welfare and the Social Division of Welfare, 1–22. Bristol: Policy Press.
  • Powell, M., A. Gillett, and B. Doherty. 2019. “Sustainability in Social Enterprise: Hybrid Organizing in Public Services.” Public Management Review 21 (2): 159–186. doi:10.1080/14719037.2018.1438504.
  • Powell, M., and S. P. Osborne. 2020. “Social Enterprises, Marketing, and Sustainable Public Service Provision.” International Review of Administrative Sciences 86 (1): 62–79. doi:10.1177/0020852317751244.
  • Pue, K. 2019. “Selling Out? A Cross-National Exploration of Nonprofit Retail Operations.” Nonprofit and Voluntary Sector Quarterly 48 (3). doi:10.1177/0899764019837599.
  • Rothschild, J. 2009. “Workers’ Cooperatives and Social Enterprise: A Forgotten Route to Social Equity and Democracy.” American Behavioral Scientist 52 (7): 1023–1041. doi:10.1177/0002764208327673.
  • Ryan, P. W., and I. Lyne. 2008. “Social Enterprise and the Measurement of Social Value: Methodological Issues with the Calculation and Application of the Social Return on Investment.” Education, Knowledge and Economy 2 (3): 223–237. doi:10.1080/17496890802426253.
  • Saebi, T., N. J. Foss, and S. Linder. 2019. “Social Entrepreneurship Research: Past Achievements and Future Promises.” Journal of Management 45 (1): 70–95.
  • Social Enterprise UK. 2017. 2017 State of Social Enterprise. Accessed 15 April 2019. http://sewfonline.com/wp-content/uploads/2017/09/2017-State-of-Social-Enterprise.pdf.
  • Suchman, M. C. 1995. “Managing Legitimacy: Strategic and Institutional Approaches.” Academy of Management Review 20 (3): 571–610.
  • Sud, M., C. V. VanSandt, and A. M. Baugous. 2009. “Social Entrepreneurship: The Role of Institutions.” Journal of Business Ethics 85 (1): 201–216. doi:10.1007/s10551-008-9939-1.
  • Teasdale, S., P. Alcock, and G. Smith. 2012. “Legislating for the Big Society? The Case of the Public Services (Social Value) Bill.” Public Money & Management 32 (3): 201–208. doi:10.1080/09540962.2012.676277.
  • Tracey, P., and O. Jarvis. 2007. “Toward a Theory of Social Venture Franchising.” Entrepreneurship Theory and Practice 31 (5): 667–685. doi:10.1111/j.1540-6520.2007.00194.x.
  • VanSandt, C. V., M. Sud, and C. Marmé. 2009. “Enabling the Original Intent: Catalysts for Social Entrepreneurship.” Journal of Business Ethics 90 (3): 419–428. doi:10.1007/s10551-010-0419-z.
