
Transparency for Results: Testing a Model of Performance Management in Open Government Initiatives

ABSTRACT

Government transparency continues to challenge existing frameworks for understanding organizational performance. Transparency has proven difficult to measure and results assessing its impacts are mixed. This article sets forward a model of performance-based accountability in open government initiatives. Data come from the Open Government Partnership’s (OGP) database of over 1,000 transparency initiatives across 50 countries in 2013. Ordered logistic regression estimates the effect of management practices on three different measures of transparency performance, and the results broadly support the model. Expert interviews from two country cases offer insight into how performance management is used in the context of transparency reforms.

This article takes up the claim of the so-called “modern” vision of computer-mediated transparency that information and communication technologies (ICT) increase the amount and quality of information so that public organizations become more rational (Meijer, 2009; Schillemans, Van Twist, & Vanhommerig, 2013). This vision of transparency, which is often implicit in recent scholarly works that integrate transparency and accountability, has close affinities with the performance-based accountability tradition in public administration and the goals of the new public management (NPM) movement (Bolívar & Galera, 2016; Bolívar, Pérez, & López-Hernández, 2013; Mulgan, 2014; Reenstra-Bryant, 2010). However, the character of this affinity between performance-based accountability and transparency performance has not yet been carefully examined. Prior works have certainly observed that there is an affinity, sometimes arguing that the role of transparency in NPM reforms is at odds with legalist notions of democratic processes, political and economic fairness, and rights-based notions of democratic accountability (Halachmi & Greiling, 2013). Hood (2006) argued that a potential danger of increased transparency in government is that politicians will be encouraged to play a blame avoidance game in which performance information is used to shirk responsibility rather than to commit to important long-term goals. Other streams of literature, however, such as the performance-based accountability tradition, frequently highlight a vital role for transparency in performance measurement (e.g., Landow & Ebdon, 2012; Bianchi & Rivenbark, 2012; Moynihan & Ingraham, 2003). It is argued that without transparency in government, the performance milestones and the criteria used to assess those milestones cannot be observed by monitoring agencies and citizens, and accountability would therefore have no real purchase on improving performance (Ingraham & Moynihan, 2001). But are the theories empirically supported? More specifically, does performance measurement in transparency initiatives improve the openness of government, or is it a futile and potentially harmful endeavor?

One way to begin addressing the claims of the perspective of transparency and performance-based accountability is to assess whether the adoption of performance-based accountability by transparency initiatives leads to improved performance. This article attempts to open up the “black box” of management practices in transparency initiatives by developing and testing a model of transparency performance. The model draws on existing models of performance-based accountability and on models of the accountability process of transparency. By testing whether transparency initiatives that use performance-based accountability are more successful, this work can help us understand, first, how government transparency fits as a strategic management approach within managing for results (MFR) frameworks and, second, how specific management practices may improve outcomes in transparency initiatives.

There is no shortage of existing research supporting the a priori benefits of transparency (e.g., Harrison et al., 2012; Janssen, Charalabidis, & Zuiderwijk, 2012; Lathrop & Ruma, 2010). But without detailed study of internal management practices there is a risk that the design of transparency reform will be poorly informed. According to some scholars, the lack of empirical knowledge about transparency initiatives means they are more likely to function as symbols, as rhetorical devices in politics, or as Panglossian views of technological solutions to administrative and democratic problems (Grimmelikhuijsen, 2012; Hansson, Belkacem, & Ekenberg, 2015; Moynihan & Ingraham, 2003). The arrival of a type of transparency policy called “open government” has increased discussion of how transparency initiatives actually deliver the benefits of public accountability that they claim (Yu & Robinson, 2012). Open government seeks to advance transparency through a broader range of programs that foster citizen participation, accountability, and democratic processes through information and communication technologies (Harrison et al., 2012; Lee & Kwak, 2012; Lourenço, Piotrowski, & Ingrams, 2015). The transparency programs in open government include areas such as budget openness and freedom of information (FOI) (e.g., Birkinshaw, 2002; Worthy, 2010), public participation in government innovation or public policies and town hall style online forums for public comment on regulations or legislative proposals (Beierle & Konisky, 2000; Bishop & Davis, 2002; Evans & Campos, 2013; Linders, 2012), and better transparency in relationships between government agencies and third-party organizations in the private and nonprofit sectors (e.g., Peled, 2011; Scholl & Klischewski, 2007).

A second motivation for research on the performance-based accountability of transparency initiatives is that the fields of public management theory and public policy performance are rich in research on internal management design in many other important areas of public administration action and decision-making, but this knowledge is lacking in transparency policy. Numerous works have investigated the managerial processes associated with job satisfaction (e.g., Yang & Kassekert, 2009), successful organizational change (e.g., Fernandez & Rainey, 2006; Greenwood & Hinings, 1996; Markus & Robey, 1988), privatization (e.g., Auger, 1999; Hefetz & Warner, 2004; Mintzberg, 1996), collaboration (e.g., Emerson, Nabatchi, & Balogh, 2012; Imperial, 2005; Waugh & Streib, 2006), and innovation (e.g., Berry & Berry, 1999; Ebbers & Van Dijk, 2007). Weil et al. (2006) argued that transparency policies become effective only once they are embedded in the everyday decision-making practices of citizen-users and manager-disclosers of information. However, their study focused on transparency in commercial and industry regulation, and this perspective needs to be extended to transparency policies in government.

In order to address this shortage of research on management practice and transparency performance, this article develops a theoretical framework from public management literature and puts forward hypotheses about the performance-based management practices associated with successful open government policy. The hypotheses are then tested using data from the OGP with a series of regression models and an exploration of two cases of open government initiatives.

Theoretical framework and hypotheses

Management strategies play an important role in shaping the outcomes of public policies and programs. This article uses the MFR framework from the Government Performance Project (GPP), adapting four MFR practices described by Moynihan and Ingraham (2003): (1) performance milestones, (2) measurability, (3) strategic coordination, and (4) goal clarity. The literature review is used to develop the GPP framework into a measurement model that relies not only on MFR research, but also on transparency research and related literature on e-government (Bolívar, Pérez, & López-Hernández, 2015; Brown, 2005) and network governance (Edelenbos, Klijn, & Steijn, 2011; Sørensen & Torfing, 2009), as these areas are subsystems of transparency policies and programs.

Performance milestones

According to Moynihan and Ingraham (2003, p. 475), “Leaders and managers use results data for policy making, management, and evaluation of progress”. Performance milestones provide reference points for assessing actual results against expected results, and are therefore essential for performance evaluation (Behn & Kant, 1999). Milestones should cover the whole range of program attributes, from inputs and activities to outputs, intermediate outputs, and end outcomes, but there needs to be a shared understanding in the organization of how these milestones relate to what good performance should look like (Behn & Kant, 1999; Wholey, 1999), and an agreed set of benchmarks to assess the degree to which performance milestones have been met (Nyhan & Martin, 1999). According to Sørensen and Torfing (2009, p. 249), milestones focus actors on the “production of outputs in terms of reports, conferences, plans, policy proposals and direct interventions.” A further benefit of milestones is to engender a sense of achievement among participants that can be celebrated and used as motivation (Behn, 2003).

H1: Transparency initiatives that use performance milestones will achieve higher performance.

Measurability

According to Moynihan and Ingraham (2003), governments need to develop “indicators and evaluative data that can measure progress toward results and accomplishments”. There is a close relationship between performance and specificity of measurement because monitoring and verification of performance outcomes are more likely to be achieved if there are clear points of reference by which to measure progress (Holzer & Halachmi, 1996; Zia & Koliba, 2011). As well as an organizational decision-making process on what to measure, performance measurement involves decisions by the managers of a policy or program to establish clear indicators of program success (Baehler, 2003; Julnes & Holzer, 2001; Kravchuk & Schack, 1996) and a long-term plan for targets that are expected to be reached within a specific timeframe (Lapuente & Nistotskaya, 2009). Different types of program need to be clearly distinguished so that appropriate measures can be matched to the program rather than relying on a broad, one-size-fits-all approach (Behn, 2003). Another reason for ensuring careful matching between measurement and expected results is that measurement may encourage cheating or blame avoidance behavior by public officials (Charbonneau & Bellavance, 2012).

Measurability also provides levers of control for managers by providing knowledge of the link between micro-level practices and outcomes (Heinrich, 2002; Hellberg & Hedström, 2015; Lapuente & Nistotskaya, 2009; Zuiderwijk & Janssen, 2014) in order that organizational action can be unified around those outcomes (May & Winter, 2007; Meijer, 2015). Measurability has been noted as a management practice that could have an especially notable impact on outcomes in ICT-supported areas of policy such as e-government, where interorganizational learning and interoperability are highly complex (Morgeson & Mithas, 2009).

H2: Transparency initiatives that use specific and measurable performance indicators will achieve higher performance.

Strategic coordination

Performance management involves strategic coordination when there are agreed plans of action coordinated between agencies and central leadership (Moynihan & Ingraham, 2003). According to Poister and Streib (1999, p. 308), strategic management involves areas of coordination such as “focusing attention across functional divisions and throughout various organizational levels on common goals, themes, and issues.” Previous scholars have identified an important link between coordination among government agencies and accountability (Page, 2004). However, the link is conditional on structural factors such as whether there is a high or low degree of control, with low control appearing to be the most effective (Radin & Romzek, 1996), and on having political backing as a central or influential agency (Page, 2004). The link also benefits from transparency of information so that coordinating organizations can see what the other is doing (Gormley & Weimer, 1999; Ingraham & Moynihan, 2001). Coordination also has drawbacks, such as increased transaction costs, but the strategic part of coordination involves specifying agreed roles and responsibilities for decision-making (Sørensen & Torfing, 2009).

H3: Transparency initiatives that are coordinated between a central political agency and another agency will achieve higher performance.

Goal clarity

A final ingredient in the evaluation criteria discussed by Moynihan and Ingraham (2003) is goal clarity, which concerns the clarity of communication between government and stakeholders regarding stipulated activities and results. According to Beisheim et al. (2014, p. 664), “[t]asks, rules, and commitments need to be defined clearly and precisely, that is, unambiguously, to reduce the room for interpretation”. When the scope of a task is clear, the steps linking the process to the goals of the task are easier to follow and the goal is more likely to be achieved (Beierle & Konisky, 2000). Understanding of organizational goals is essential to organizational performance because it allows coordination among organizational levels (May & Winter, 2007; Meijer, 2015; Simon, 2000). By designing clear goals, organizations are able to create a coherent strategy and effective, measurable outcomes (Chun & Rainey, 2005; Julnes & Holzer, 2001; Kravchuk & Schack, 1996). Without clear organizational goals, riskier strategic choices are more likely (Bozeman & Kingsley, 1998), and the lack of cognitive clarity can produce organization-wide detrimental effects such as low motivation (Jung, 2014; Wright, 2004). Such detriments are therefore expected to be negatively associated with the performance of open government initiatives, while clear goals are expected to be positively associated with it.

More specifically, the goals of open government should include public participation (Meijer, Curtin, & Hillebrandt, 2012; Piotrowski & Liao, 2012) and accountability (Lourenço, Piotrowski, & Ingrams, 2015; Yu & Robinson, 2012). It is therefore expected that open government initiatives with clear goals for accountability and public participation will perform better than initiatives that do not have clear goals. As open government reforms have been strongly supported by advances in ICT applications in organizations (Grimmelikhuijsen, 2012; Hansson et al., 2015; Harrison et al., 2012), it is also expected that initiatives with goals focusing on technology adoption and information sharing, such as open data and websites, are more likely to perform highly.

H4: Transparency initiatives with clear goals are more likely to achieve higher performance.

Transparency performance

Public organizational performance is a complex phenomenon that should be operationalized carefully by both practitioners and scholars, who recognize that how performance is measured strongly shapes understanding of the relationships under investigation. In this research, performance is measured in three different ways in order to capture a broad concept of organizational success and to test differences among these measurements. The three measures are: (1) the progress of open government initiatives in meeting their schedule, (2) the potential of the initiative to create valuable impacts, and (3) effectiveness, a combination of (1) and (2) that captures initiatives that have the potential to create valuable impacts and were also in fact being completed. The impacts of initiatives are a vital part of performance, but are easily overlooked in performance evaluation (Yang & Holzer, 2006).

However, performance evaluation should go beyond impacts to address the ability of administrators to comply with what they say they will do (O’Toole, 1997; Ringquist, 1995). According to Scholz (1991), program effectiveness is strongly determined by both the ability of administrators to implement goals (progress compliance) and to control their outcomes (impacts). Therefore, it makes sense to provide a measure of the performance of transparency initiatives both in terms of how good the outcomes are expected to be and how successful administrators have been in keeping to the implementation schedule.

Using the foregoing theoretical discussion, a conceptual model of performance measurement in open government can now be formed (Figure 1). The causal flow of the diagram is adapted from the MFR framework of Moynihan and Ingraham (2003). However, here, the performance management factors—strategic coordination, measurability, milestones, and clear goals—operate across the organizational subsystems of transparency, which are ICT infrastructure, e-government, public participation, and collaboration. These factors and the subsystems comprise the “black box” of management, the transparency capacity that supports the performance of transparency initiatives.

Figure 1. Conceptual model of performance management in open government.

The transparency capacity enables the program delivery of transparency initiatives. The program then leads to the actual performance of the initiative, which is measured in the next step. This performance measurement continues to feed back into the circular process of informing program progress and planning; the four practice areas of transparency capacity are informed by the results of the program and re-evaluated within the transparency subsystems. This is a hypothetical model; the remainder of this article addresses its empirical testing through the operationalization of the model’s performance measurement factors, the data sources, and the results of the analysis.

Data and methods

Two sources of data—one quantitative and one qualitative—are used in order to triangulate results and provide a more robust test of methodological validity (Creswell, 2013). Both data sources come from the OGP, a steadily growing international multilateral organization composed of 75 countries from around the world that commit to goals for improving transparency. A country can join the OGP so long as it meets a minimum standard of commitment to government transparency and media freedom, such as having a formal right to information law and protections for civil society organizations and journalists. Quantitative performance data are created in the following way: every two years, as part of the OGP’s Independent Reporting Mechanism (IRM), an official assessor is selected for each country to evaluate the country’s success in achieving the initiatives set out in its National Action Plan (NAP). The IRM assessor is a contracted professional, normally from an academic, journalistic, or consulting background, with experience in performance evaluation. A centralized training course is provided to all IRM assessors, and an official coding manual helps standardize the coding process across countries. In order to ensure a high standard in the assessment, the OGP uses an expert panel of internationally respected academics and policymakers to oversee the quality management process. Assessment involves a range of tools intended to reach a diversity of views on the performance of the initiatives, such as structured and semi-structured interviews, questionnaires, focus groups, and document analysis. The IRM process involves a criteria-based form of performance evaluation. Criteria-based assessments “offer an understandable, transparent set of standards that can assess and compare a high number of governments” (Moynihan & Ingraham, 2003, p. 471). Scholars have identified some pitfalls in the criteria-based approach to performance measurement (Ingraham & Moynihan, 2001). One issue is the question of who is to judge that the selected criteria are the “right” ones to use for measurement. The other argument against criteria-based assessments is that they are circular because they are set up to look only for the management practices implied in the criteria. However, criteria-based assessments can be reliable if they are selected using a thorough process, incorporate a diversity of views, and are amenable to change if better criteria become apparent (Ingraham & Moynihan, 2001). Ingraham and Moynihan (2001) also point out that the way the criteria are fulfilled is as important as whether they have been fulfilled.

There is considerable flexibility in the number and type of initiatives that OGP countries may include in their NAP. For example, an initiative from one country simply states that it comprises “Involvement of the civil society representatives in the elaboration of draft legislation of public interest, organisation of public hearings by state institutions” (Azerbaijan, 2014). Another initiative, which runs to some 100 words, aims to establish a “‘Smart Region’, as a way to organise public services more efficiently and create a more open and participative public sector as a driver for innovation” (Progress Report, 2012–2013, Denmark, 2013). The IRM assessor evaluates the performance of the country in achieving the commitments laid out in the NAP and delivers the final progress report with quantitative and qualitative assessments of the NAP’s success in delivering its goals. The performance of each individual policy commitment is evaluated by the IRM, and these individual policies are the unit of analysis used in this article. Table 1 shows the descriptive statistics and the measurement items from the IRM evaluation. The data all come from the first IRM progress reports of the 50 countries that produced reports for the year 2012–2013 and were evaluated by the IRM assessors in 2014. For the dependent variable, the IRM scores the level of initiative progress using five levels (canceled, no progress, limited progress, substantial progress, and completion). The impact score is assessed on four levels (no impact, limited impact, moderate impact, or transformative impact). The variable for effectiveness was operationalized in this article by multiplying the impact and progress scores, which leads to a score between 1 and 12. For the independent variables, binary measures are used to assess whether strategic coordination, milestones, and clear goals were used. To be classed as strategic coordination, an initiative must be implemented jointly by at least two government agencies and have an officially designated agency leader responsible for coordination. To be classed as a milestone initiative, the initiative must designate multiple points in time at which specific stages of initiative progress need to be accomplished. To be classed as having a clear goal, the initiative’s officially written goal must specifically address one of four open government values: accountability, public participation, technology innovation, or new information sharing. Each type of goal is estimated separately as a dummy variable. Finally, the fourth independent variable, measurability, was estimated as an ordinal 3-point scale ranging from no measurability, to specifying measurable outputs, to specifying measurable outputs with specific indicators.
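To make the operationalization concrete, the short Python sketch below shows one way the variables described above could be assembled. It is a minimal illustration rather than the coding procedure actually used for the IRM data; the column names and numeric codings are assumptions for the example.

import pandas as pd

# Hypothetical extract of IRM-coded initiatives (codings assumed for illustration):
# progress: five ordered levels (canceled/no progress ... completion)
# impact: four ordered levels (none ... transformative)
# measurability: 0 = none, 1 = measurable outputs, 2 = outputs with specific indicators
df = pd.DataFrame({
    "country":       ["Denmark", "Denmark", "Azerbaijan"],
    "progress":      [3, 4, 1],
    "impact":        [2, 1, 0],
    "milestones":    [1, 1, 0],   # binary: multiple timed stages specified
    "strategic":     [0, 1, 0],   # binary: joint implementation with a designated lead agency
    "measurability": [2, 1, 0],
    "goal":          ["accountability", "technology", None],  # None = not relevant to any OG value
})

# Effectiveness combines potential impact with implementation progress;
# the article operationalizes it as the product of the two scores.
df["effectiveness"] = df["progress"] * df["impact"]

# One dummy variable per open government goal; initiatives relevant to none
# of the four goals form the referent group (all dummies equal to zero).
df = pd.concat([df, pd.get_dummies(df["goal"], prefix="goal")], axis=1)
print(df)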

Table 1. Descriptive statistics.

To test the article’s hypotheses using the OGP data, an ordinal logistic regression model was estimated. This type of model is suited to dependent variables that are binary or ordinal. Three models were estimated, each with a different dependent variable (1. progress; 2. potential impact; 3. effectiveness) but the same independent variables. Standard errors were clustered by country to account for correlation among initiatives within the same country. Four dummy variables were included in the model for the four possible kinds of open government goal used to categorize initiatives. Some initiatives are not deemed relevant to any of the four open government goals, and this group of initiatives was used as the referent group.
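As a rough illustration of this estimation strategy, the sketch below fits an ordered logit with the OrderedModel class from the statsmodels Python library, using synthetic data that stands in for the IRM dataset. The variable names follow the earlier sketch, and country-clustered standard errors are omitted for simplicity, so this should be read as an outline of the call pattern rather than a reproduction of the article’s models.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic stand-in for the IRM data (the article uses over 1,000 coded initiatives).
rng = np.random.default_rng(0)
n = 300
data = pd.DataFrame({
    "milestones":          rng.integers(0, 2, n),
    "strategic":           rng.integers(0, 2, n),
    "measurability":       rng.integers(0, 3, n),
    "goal_accountability": rng.integers(0, 2, n),
    "goal_technology":     rng.integers(0, 2, n),
})
# Latent propensity: better performance management -> higher progress category.
latent = (0.5 * data["milestones"] + 0.4 * data["measurability"]
          + 0.3 * data["goal_accountability"] + rng.logistic(size=n))
data["progress"] = pd.cut(latent, bins=[-np.inf, -0.5, 0.5, 1.5, 2.5, np.inf],
                          labels=False)  # five ordered levels, coded 0-4

# Model 1: ordered logit of progress on the performance management practices.
X = data[["milestones", "strategic", "measurability",
          "goal_accountability", "goal_technology"]].astype(float)
result = OrderedModel(data["progress"], X, distr="logit").fit(method="bfgs", disp=False)
print(result.summary())
# Models 2 and 3 would substitute potential impact and effectiveness as the
# dependent variable; the article additionally clusters standard errors by country.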

To further explore the results of the quantitative analysis, a qualitative method of analysis was used, involving close examination of two OGP initiatives: (1) a police transparency initiative in the United States and (2) an open healthcare initiative in the United Kingdom (for the official descriptions of the initiatives, see the Appendix). These cases were selected to provide richer detail on the kinds of performance-based management practices used in governments that typically perform highly in transparency (Armstrong, 2005). The United Kingdom has been a global leader in the managing for results movement (Pollitt, 1996), and it tends to score highly on indicators of transparency quality such as the World Bank’s governance rankings (Hood, 2006). The United States is similarly viewed as a global leader; it too has been a vocal advocate of the managing for results movement, and it has a rich culture of citizen participation and accountability (Vigoda, 2002). Both countries were founding members of the OGP in 2011. Open government initiatives in the two areas of policy—law enforcement and healthcare—were chosen so that the policy areas would have strong “everyday” relevance to citizens in those countries rather than being merely technical areas of transparency work with low public media interest or political salience. Health service provision and law enforcement are “life or death” types of transparency policy areas in De Fine Licht’s (2014) sense of a critical policy area. Both areas of policy are also prominent, even controversial, political topics in the respective countries. This selection aims to address a concern highlighted in scholarly literature that transparency policies frequently do not engage with important concerns and relevant policy interests of citizens (Piotrowski & Van Ryzin, 2007).

In each country case, a snowball sampling technique was used to identify interviewees. Staff at the OGP headquarters in Washington, D.C., provided contact information for three individuals from the two governments who were senior decision-makers in the design of the initiatives. After the interviews with this initial core, these individuals recommended other suitable interview candidates who had worked on the initiative within either a public agency or a civil society organization. One requirement was specified in identifying candidates: they must be senior managers with decision-making authority in the design and implementation of the initiative. The researchers also tried to balance the number of government and civil society interviewees. The interviews were carried out by phone and took place between May and August 2016. In total, 35 people were interviewed: 22 for the US initiative (10 civil society and 12 government) and 13 for the UK initiative (5 civil society and 8 government). The interviews were semi-structured. All interviewees were asked the same predetermined opening question about what factors could help their open government initiative to be successful, and the interviewer used follow-up questions to elicit further details regarding the practices used in the initiative. Follow-up questions were designed to encourage interviewees to expand on unclear statements rather than guiding them toward a specific topic, for example, “could you explain that further?” or “could you provide an example?” By starting with a general question about success factors, the interview approach aimed to give the respondents freedom to say what they wanted rather than narrowing the conversation to a specific area of performance management behavior preselected by the researcher (Berg & Lune, 2012).

Results

Regression analysis

The results of the regression model (presented in Table 2) show large coefficients for measurability across all the models, for milestones in Models 1 and 2, and for goal clarity in Model 2. In addition to the result for goal clarity in general, the analysis of specific kinds of goals reveals that accountability and technology goals are positively associated with impact and effectiveness but not with initiative progress. One performance management practice, strategic coordination, is not significant in any of the models, suggesting that strategic coordination does not affect the likelihood of open government initiatives performing better. The coefficients are logs of odds ratios ranging between 0.081 (measurability in Model 2) and 0.724 (goal clarity in Model 2). All models show strong overall reliability, with the Wald chi-square estimates being highly significant. The pseudo R2 values are small, ranging between 0.022 and 0.057. This suggests that the performance management factors do not explain much of the variance in the outcomes of open government initiatives; other explanatory factors must also be involved.
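Because the reported coefficients are on the log-odds scale, exponentiating them gives odds ratios, which are easier to interpret. The quick check below uses the two bounding values quoted above purely as an illustration.

import math

# Coefficients reported in the text are logs of odds ratios; exponentiate to interpret.
for label, beta in [("measurability, Model 2", 0.081), ("goal clarity, Model 2", 0.724)]:
    print(f"{label}: odds ratio = {math.exp(beta):.2f}")
# 0.724 corresponds to roughly doubled odds (about 2.06) of a higher outcome
# category; 0.081 corresponds to roughly an 8% increase in the odds (about 1.08).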

Table 2. Ordinal logistic regression results (beta coefficients and standard errors).

Among the dummy variables for the type of goal involved in the open government initiatives, only initiatives designed to address the role of technology or accountability in promoting open government are significantly more effective than initiatives that do not address any specific open government goal. Initiatives with technology- or accountability-related open government goals are significantly more likely to be successful in Models 2 and 3 compared to initiatives that are not relevant to any open government goal. Initiatives designed with clear goals in terms of information or participation are not associated with any of the performance outcomes.

Table 3 shows the statistical decisions on the hypotheses. Only one of the variables, strategic coordination, is not significant in any of the models, and so its hypothesis (H3) is rejected in all models. The hypothesis for the relationship between milestones and performance (H1) is accepted in Models 1 and 2, for the dependent variables of progress and potential impact, respectively. Measurability (H2) is accepted in all of the models, and goal clarity (H4) is accepted only in Model 2, for performance in terms of potential impact.

Table 3. Decisions on empirical hypotheses based on p-value estimates.

Case study interviews

If I was a police chief I would engage community about what information needs to be provided. Actually engage the community to address community problems such as drink driving. Make data evaluation to bring people in. Use data in new and interesting ways. But then you also need to track outcomes that you want to achieve and make sure what differences it has made.

The quote above is one example of the way that performance measurement was used in the open government initiative cases. The interview respondents voiced frequent support for the invaluable role of managerial accountability and performance measurement. In the view of many interviewees, the managers with the best approach to achieving their goals were those prepared to let facts and information convey the “reality” of the work of public officials rather than covering over or polishing facts that they feared might reflect negatively on their reputation. For example, one interviewee said that “the department owned that data and did not remove it or keep it away from public. Being willing to be accountable is crucial.” Another interviewee emphasized the internal benefits of goal clarity: “to create more communication for data internally. When you are open with information it creates transparency for personnel to stay in touch about what is happening and that can produce better understanding.” These practices, in the case of the police initiative, were the best way “to improve trust, bring better insight and analysis to policing efforts”, as described in the wording of the initiative. The use of performance information meant, according to one respondent, that “we saw the impact and effect of improvement.”

However, the respondents also emphasized that an indiscriminate release of information was counterproductive. Rather, managers needed to be specific about their performance targets and ensure that information on actual practice was matched to those targets. One police chief said that his department had “developed a measuring tool on how to prioritize release of different kinds of information. They have 600 datasets in the inventory so we need to demonstrate which to release.” While the respondents in the health initiative did not mention strategic coordination with other departments, they consistently said that they relied on the collaboration of partners from academia, civil society, and tech companies, as well as their local communities, to help them define what the performance targets should be. Goal-setting was thus a strongly shared and coordinated process; it required both participation in defining goals and a rigorous and bold policy of openness that allowed third parties to conduct independent analysis and decide for themselves whether performance had been adequate or whether some failing needed to be addressed by calling out a specific department or responsible police chief. One respondent noted that managers should recognize that the source of targets may ultimately be political, but that what mattered was finding a way to achieve them that was reasonable to all parties: “The government’s interest is often to use data to partner with organizations and then help them to manage use of data. The data was then analyzed to apply to various political goals, but at least it could lead to government and civil society having a specific aim with a clear pipeline.” Technical aspects of information selection were also important. The following commentary on knowing what kinds of performance information to release was typical: “There are also level issues with incident level information that has personally identifiable information including domestic violence or assault. This makes it particularly hard in police data.”

While the majority of the respondents’ views corroborate the finding of the regression analysis that performance management makes a difference in transparency initiatives, views on performance-based accountability in open government were sometimes negative where an initiative was not designed properly. One interviewee said that

the armchair auditor idea was supposed to be a model for holding government accountable. It implies some antagonism. It is actually quite boring for citizens. You either get a loud internet troll who just causes problems or it’s experts that use it. However, this model can be useful if correctly developed.

The problems with performance measurement occurred because metrics were not set correctly in the first place. One interviewee said that “measurement of open data is often about downloads and superficial metrics. There are other measurements around re-use and quality data.”

Discussion of findings and limitations

Research on transparency has been marked by two countervailing perspectives. On the one hand, the theory of transparency is marked by optimism and normative assumptions about its positive relationship with many core public administration values such as efficiency, citizen participation, and accountability. On the other hand, research has found that delivering these values is conditional on administrative, technical, and political variables; transparency does not occur in a vacuum. Moreover, transparency as a political reform concept is prone to exaggerated optimism and political rhetorical strategies. To begin addressing this gap between theory and practice, this article employs the insights of performance-based accountability theory to hypothesize a range of practices related to high performing transparency programs. The results show broad support for the empirical model of transparency performance and for a range of performance management practices: measurability, which distinguishes specific areas of action and matches them with specific and measurable outcomes; performance milestones, which provide a way to evaluate progress and assess results; and goal clarity, which provides more purposeful action through better communication and coordination. These are aspects of performance-based accountability that have been widely used in public administration literature, but they have not before been developed into a framework for understanding the management of transparency initiatives. The variation in the results for different kinds of performance outcome—progress, impact, and effectiveness—also reveals the importance of distinguishing different measures of open government performance. While some management practices such as performance milestones may help open government initiatives to make progress, they may not help initiatives be more effective. This makes logical sense, given that milestones are designed to help policymakers find a route through an initiative from start to finish, but may not advance its long-term effectiveness. Similarly, goal clarity, by focusing and clarifying how an open government initiative can make a difference, may have more influence on the initiative’s impact than on its progress.

The research here can help advance program evaluation in the area of transparency policies. Measurement of program performance is a complex and multifaceted topic in public administration literature, and transparency initiatives are no different. Indeed, another distinctive problem with the practice and theory of transparency initiatives is the way that adoption of ICT is often viewed by governments as a de facto step toward progress. In order to address this problem, the model was estimated separately for three different dependent variables: progress, potential impact, and effectiveness. Hypothesis 4 directly addressed expected differences in the outcomes of open government initiatives, drawing on scholarly research on the relative likelihood of making progress on information and technology-related initiatives compared to the more challenging initiatives that focus on goals of public participation and public accountability. The results suggest a new finding: while initiatives that promote accountability and technology are viewed as having better potential impacts and as being more effective, they are not more likely to make progress towards completion. As public participation and information goals are not associated with performance outcomes, it seems that previous findings regarding the difficulty of balancing public participation with effectiveness in achieving outcomes (e.g., Dahl, 1994; Montpetit, 2008; Skelcher & Torfing, 2010) are equally challenging in the context of transparency. Therefore, decision-makers must do more to support initiatives in those areas. Similarly, previous research identified difficulties for agencies in adopting coordination as a strategy to mitigate fragmentation. While NPM reforms viewed fragmentation as a chief evil to address, open government reforms seek to create openness and accountability, which may have very little bearing on fragmentation. Other recent work finds that, in the age of open government, strategic coordination and interorganizational collaboration are more a goal of open government than a way of achieving more openness (McDermott, 2010). Coordination can even hamper performance by increasing the complexity of interorganizational relationships (Lee & Kwak, 2012). Therefore, strategic coordination may not be the positive influence on performance that it was in earlier performance management reforms.

Several limitations should be mentioned regarding this study. Firstly, as the observations in the OGP data are cross-sectional, the estimated parameters cannot be used as conclusive evidence of the causal relationships suggested by the theory in this article. While the theoretical framework presents a wide range of previous findings that the authors find persuasive as an empirical case for the theory, it is possible that rival explanations are more empirically robust, and further research is needed to provide corroboration. Relatedly, while the theoretical framework used to generate the empirical hypotheses is well established in scholarly literature, a further reason to be cautious in interpreting the results is that the pseudo R2 scores are very low, varying between 0.022 in Model 1 and 0.057 in Model 2. While a low R2 does not undermine the statistical interpretation of confidence intervals, it does mean that a large amount of variance is left unexplained by the models. Variables not measured here that relate to the macro-political, environmental, and administrative context also play an important role in the performance of open government initiatives (Janssen et al., 2012). The analysis of the interviews with managers of open government initiatives offers a qualitative reference point to triangulate the results of the regression analysis. The contribution of the case study results is to show evidence that performance management is being used within a global set of open government initiatives, and to explore the character of the performance management practices used in countries with high transparency capacity. But the results from the interviews, which are particularistic and non-generalizable, cannot be used to offer direct support or lack of support for the hypothesized relationships, which involve estimates of probability.

Conclusion

Open government is a growing area of public policy and institutional reform in many countries around the world. As it is a new area, it lacks clear frameworks for understanding the management practices that improve the outcomes of transparency initiatives. Not only are open government initiatives a new area of transparency policy, but they also present new challenges to government in important core values such as accountability, efficiency, participation, and transparency. This article began with the research question of how technocratic process reform with highly abstract notions of output can be accommodated within traditional models of performance management design. A series of hypotheses was generated from a literature review of performance management drawn from transparency research as well as the related fields of e-government and network governance theory. Regression estimates largely support the performance model: performance management factors such as measurability, performance milestones, and goal clarity are all supported. While goal clarity is a significant factor, clarity of goals in areas such as accountability and technology is associated more with potential impact and effectiveness than with progress in meeting scheduled targets.

The findings present practical opportunities that can be taken up by administrators, civil society organizations, and citizens advocating for more effective transparency. The article has demonstrated that a specific set of performance management practices plays a role in the performance of modern transparency initiatives. It therefore advances scholarly knowledge of transparency as an administrative reform area rather than simply a collection of disparate policy practices such as freedom of information or open data. However, the research, while important, provides an incomplete picture of transparency performance. More research is needed on the macro-level factors that can be integrated with the findings here to build a more comprehensive picture of transparency policy success.

References

  • Armstrong, E. (2005). Integrity, transparency and accountability in public administration: Recent trends, regional and international developments and emerging issues. United Nations. Retrieved from http://unpan1.un.org/intradoc/groups/public/documents/UN/UNPAN020955.pdf
  • Auger, D. A. (1999). Privatization, contracting, and the states: Lessons from state government experience. Public Productivity & Management Review, 22(4), 435–454.
  • Azerbaijan (2014). Independent Reporting Mechanism (IRM) Progress Report 2014-2015: Azerbaijan. Washington DC: The Open Government Partnership.
  • Baehler, K. (2003). Evaluation and the policy cycle. Evaluating policy and practice: A New Zealand reader. Auckland, NZ: Pearson Prentice Hall.
  • Behn, R. D. (2003). Why measure performance? Different purposes require different measures. Public Administration Review, 63(5), 586–606. doi:10.1111/puar.2003.63.issue-5
  • Behn, R. D., & Kant, P. A. (1999). Strategies for avoiding the pitfalls of performance contracting. Public Productivity & Management Review, 22(4), 470–489.
  • Beierle, T. C., & Konisky, D. M. (2000). Values, conflict, and trust in participatory environmental planning. Journal of Policy Analysis and Management, 19(4), 587–602. doi:10.1002/(ISSN)1520-6688
  • Beisheim, M., Liese, A., Janetschek, H., & Sarre, J. (2014). Transnational partnerships: Conditions for successful service provision in areas of limited statehood. Governance, 27(4), 655–673. doi:10.1111/gove.2014.27.issue-4
  • Berg, B., & Lune, H. (2012). Qualitative research methods for the social sciences. London, UK: Pearson.
  • Berry, F. S., & Berry, W. D. (1999). Innovation and diffusion models in policy research. In Sabatier, P., & Weible, C. (Eds.). Theories of the Policy Process. Boulder, CO: Westview Press.
  • Bianchi, C., & Rivenbark, W. C. (2012). A comparative analysis of performance management systems: The cases of Sicily and North Carolina. Public Performance & Management Review, 35(3), 509–526. doi:10.2753/PMR1530-9576350307
  • Birkinshaw, P. (2002). Freedom of information in the UK and Europe: Further progress? Government Information Quarterly, 19(1), 77–86. doi:10.1016/S0740-624X(01)00097-1
  • Bishop, P., & Davis, G. (2002). Mapping public participation in policy choices. Australian Journal of Public Administration, 61(1), 14–29. doi:10.1111/ajpa.2002.61.issue-1
  • Bolívar, M. P. R., & Galera, A. N. (2016). The effect of changes in public sector accounting policies on administrative reforms addressed to citizens. Administration & Society, 48(1), 31–72. doi:10.1177/0095399713498751
  • Bolívar, M. P. R., Pérez, M. D. C. C., & López-Hernández, A. M. (2013). Online budget transparency in OECD member countries and administrative culture. Administration & Society, 47(8), 943–982.
  • Bozeman, B., & Kingsley, G. (1998). Risk culture in public and private organizations. Public Administration Review, 58, 109–118. doi:10.2307/976358
  • Brown, D. (2005). Electronic government and public administration. International Review of Administrative Sciences, 71(2), 241–254. doi:10.1177/0020852305053883
  • Charbonneau, E., & Bellavance, F. (2012). Blame avoidance in public reporting: Evidence from a provincially mandated municipal performance measurement regime. Public Performance & Management Review, 35(3), 399–421. doi:10.2753/PMR1530-9576350301
  • Chun, Y. H., & Rainey, H. G. (2005). Goal ambiguity and organizational performance in US federal agencies. Journal of Public Administration Research and Theory, 15(4), 529–557.
  • Creswell, J. W. (2013). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage Publications.
  • Dahl, R. A. (1994). A democratic dilemma: System effectiveness versus citizen participation. Political Science Quarterly, 109(1), 23–34. doi:10.2307/2151659
  • De Fine Licht, J. (2014). Policy area as a potential moderator of transparency effects: An experiment. Public Administration Review, 74(3), 361–371. doi:10.1111/puar.12194
  • Denmark (2013). Independent Reporting Mechanism (IRM) Progress Report 2012–2013 Denmark. Washington, DC: The Open Government Partnership.
  • Ebbers, W. E., & Van Dijk, J. A. (2007). Resistance and support to electronic government, building a model of innovation. Government Information Quarterly, 24(3), 554–575. doi:10.1016/j.giq.2006.09.008
  • Edelenbos, J., Klijn, E. H., & Steijn, B. (2011). Managers in governance networks: How to reach good outcomes? International Public Management Journal, 14(4), 420–444. doi:10.1080/10967494.2011.656055
  • Emerson, K., Nabatchi, T., & Balogh, S. (2012). An integrative framework for collaborative governance. Journal of Public Administration Research and Theory, 22(1), 1–29.
  • Evans, A. M., & Campos, A. (2013). Open government initiatives: Challenges of citizen participation. Journal of Policy Analysis and Management, 32(1), 172–185. doi:10.1002/pam.2013.32.issue-1
  • Fernandez, S., & Rainey, H. G. (2006). Managing successful organizational change in the public sector. Public Administration Review, 66(2), 168–176. doi:10.1111/puar.2006.66.issue-2
  • Gormley, W. T., & Weimer, D. L. (1999). Organizational report cards. Cambridge, MA: Harvard University Press.
  • Greenwood, R., & Hinings, C. R. (1996). Understanding radical organizational change: Bringing together the old and the new institutionalism. Academy of Management Review, 21(4), 1022–1054.
  • Grimmelikhuijsen, S. (2012). Linking transparency, knowledge and citizen trust in government: An experiment. International Review of Administrative Sciences, 78(1), 50–73. doi:10.1177/0020852311429667
  • Halachmi, A., & Greiling, D. (2013). Transparency, e-government, and accountability: Some issues and considerations. Public Performance & Management Review, 36(4), 562–584. doi:10.2753/PMR1530-9576360404
  • Hansson, K., Belkacem, K., & Ekenberg, L. (2015). Open government and democracy: A research review. Social Science Computer Review, 33(5), 540–555. doi:10.1177/0894439314560847
  • Harrison, T. M., Guerrero, S., Burke, G. B., Cook, M., Cresswell, A., Helbig, N., … Pardo, T. (2012). Open government and e-government: Democratic challenges from a public value perspective. Information Polity, 17(2), 83–97.
  • Hefetz, A., & Warner, M. (2004). Privatization and its reverse: Explaining the dynamics of the government contracting process. Journal of Public Administration Research and Theory, 14(2), 171–190. doi:10.1093/jopart/muh012
  • Heinrich, C. J. (2002). Outcomes-based performance management in the public sector: Implications for government accountability and effectiveness. Public Administration Review, 62(6), 712–725. doi:10.1111/puar.2002.62.issue-6
  • Hellberg, A. S., & Hedström, K. (2015). The story of the sixth myth of open data and open government. Transforming Government: People, Process and Policy, 9(1), 35–51. doi:10.1108/TG-04-2014-0013
  • Holzer, M., & Halachmi, A. (1996). Measurement as a means of accountability. International Journal of Public Administration, 19(11–12), 1921–1943. doi:10.1080/01900699608525173
  • Hood, C. (2006). Gaming in targetworld: The targets approach to managing British public services. Public Administration Review, 66(4), 515–521. doi:10.1111/j.1540-6210.2006.00612.x
  • Imperial, M. T. (2005). Using collaboration as a governance strategy lessons from six watershed management programs. Administration & Society, 37(3), 281–320. doi:10.1177/0095399705276111
  • Ingraham, P. W., & Moynihan, D. P. (2001). Comparing management systems and capacity: The benefits of a criteria-based approach. Paper presented at the 5th Annual IRSPM Conference, April, Barcelona, Spain.
  • Janssen, M., Charalabidis, Y., & Zuiderwijk, A. (2012). Benefits, adoption barriers and myths of open data and open government. Information Systems Management, 29(4), 258–268. doi:10.1080/10580530.2012.716740
  • Julnes, P. D. L., & Holzer, M. (2001). Promoting the utilization of performance measures in public organizations: An empirical study of factors affecting adoption and implementation. Public Administration Review, 61(6), 693–708. doi:10.1111/0033-3352.00140
  • Jung, C. S. (2014). Why are goals important in the public sector? Exploring the benefits of goal clarity for reducing turnover intention. Journal of Public Administration Research and Theory, 24(1), 209–234.
  • Kravchuk, R. S., & Schack, R. W. (1996). Designing effective performance-measurement systems under the Government Performance and Results Act of 1993. Public Administration Review, 56, 348–358. doi:10.2307/976376
  • Landow, P., & Ebdon, C. (2012). Public-private partnerships, public authorities, and democratic governance. Public Performance & Management Review, 35(4), 727–752. doi:10.2753/PMR1530-9576350408
  • Lapuente, V., & Nistotskaya, M. (2009). To the short‐sighted victor belong the spoils: Politics and merit adoption in comparative perspective. Governance, 22(3), 431–458. doi:10.1111/gove.2009.22.issue-3
  • Lathrop, D., & Ruma, L. (2010). Open government: Collaboration, transparency, and participation in practice. Boston, MA: O’Reilly Media.
  • Lee, G., & Kwak, Y. H. (2012). An open government maturity model for social media-based public engagement. Government Information Quarterly, 29(4), 492–503. doi:10.1016/j.giq.2012.06.001
  • Linders, D. (2012). From e-government to we-government: Defining a typology for citizen coproduction in the age of social media. Government Information Quarterly, 29(4), 446–454. doi:10.1016/j.giq.2012.06.003
  • Lourenço, R. P., Piotrowski, S., & Ingrams, A. (2015). Public accountability ICT support: A detailed account of public accountability process and tasks. In Electronic Government (pp. 105–117). New York, NY: Springer International Publishing.
  • Markus, M. L., & Robey, D. (1988). Information technology and organizational change: Causal structure in theory and research. Management Science, 34(5), 583–598. doi:10.1287/mnsc.34.5.583
  • May, P. J., & Winter, S. R. C. (2007). Collaborative service arrangements: Patterns, bases, and perceived consequences. Public Management Review, 9(4), 479–502. doi:10.1080/14719030701726473
  • McDermott, P. (2010). Building open government. Government Information Quarterly, 27(4), 401–413. doi:10.1016/j.giq.2010.07.002
  • Meijer, A. (2009). Understanding modern transparency. International Review of Administrative Sciences, 75(2), 255–269. doi:10.1177/0020852309104175
  • Meijer, A. (2015). Government transparency in historical perspective: From the ancient regime to open data in the Netherlands. International Journal of Public Administration, 38(3), 189–199. doi:10.1080/01900692.2014.934837
  • Meijer, A. J., Curtin, D., & Hillebrandt, M. (2012). Open government: Connecting vision and voice. International Review of Administrative Sciences, 78(1), 10–29. doi:10.1177/0020852311429533
  • Mintzberg, H. (1996). Managing government, governing management. Harvard Business Review, 74(3), 75.
  • Montpetit, É. (2008). Policy design for legitimacy: Expert knowledge, citizens, time and inclusion in the United Kingdom’s biotechnology sector. Public Administration, 86(1), 259–277. doi:10.1111/padm.2008.86.issue-1
  • Morgeson, F. V., & Mithas, S. (2009). Does E‐government measure up to E‐Business? Comparing end user perceptions of US federal government and E‐business web sites. Public Administration Review, 69(4), 740–752. doi:10.1111/puar.2009.69.issue-4
  • Moynihan, D. P., & Ingraham, P. W. (2003). Look for the silver lining: When performance‐based accountability systems work. Journal of Public Administration Research and Theory, 13(4), 469–490. doi:10.1093/jopart/mug032
  • Mulgan, R. (2014). Making open government work. London, UK: Palgrave Macmillan.
  • Nyhan, R. C., & Martin, L. L. (1999). Comparative performance measurement: A primer on data envelopment analysis. Public Productivity & Management Review, 22(3), 348–364.
  • O’Toole, L. J., Jr. (1997). Treating networks seriously: Practical and research-based agendas in public administration. Public Administration Review, 57, 45–52. doi:10.2307/976691
  • Page, S. (2004). Measuring accountability for results in interagency collaboratives. Public Administration Review, 64(5), 591–606. doi:10.1111/puar.2004.64.issue-5
  • Peled, A. (2011). When transparency and collaboration collide: The USA open data program. Journal of the American Society for Information Science and Technology, 62(11), 2085–2094. doi:10.1002/asi.v62.11
  • Piotrowski, S., & Liao, Y. (2012). The usability of government information. In H. L. Schachter (Ed.), The state of citizen participation in America. Charlotte, NC: Information Age Publishing.
  • Piotrowski, S. J., & Van Ryzin, G. G. (2007). Citizen attitudes toward transparency in local government. The American Review of Public Administration, 37(3), 306–323. doi:10.1177/0275074006296777
  • Poister, T. H., & Streib, G. (1999). Performance measurement in municipal government: Assessing the state of the practice. Public Administration Review, 59(4), 325–335.
  • Pollitt, C. (1996). Antistatist reforms and new administrative directions: Public administration in the United Kingdom. Public Administration Review, 56, 81–87. doi:10.2307/3110058
  • Radin, B. A., & Romzek, B. S. (1996). Accountability expectations in an intergovernmental arena: The national rural development partnership. Publius: The Journal of Federalism, 26(2), 59–81. doi:10.1093/oxfordjournals.pubjof.a029855
  • Reenstra-Bryant, R. (2010). Evaluations of business improvement districts: Ensuring relevance for individual communities. Public Performance & Management Review, 33(3), 509–523. doi:10.2753/PMR1530-9576330310
  • Ringquist, E. J. (1995). Political control and policy impact in EPA’s office of water quality. American Journal of Political Science, 39, 336–363. doi:10.2307/2111616
  • Rodríguez Bolívar, M. P., del Carmen Caba Pérez, M., & López-Hernández, A. M. (2015). Online budget transparency in OECD member countries and administrative culture. Administration & Society, 47(8), 943–982.
  • Schillemans, T., Van Twist, M., & Vanhommerig, I. (2013). Innovations in accountability: Learning through interactive, dynamic, and citizen-initiated forms of accountability. Public Performance & Management Review, 36(3), 407–435. doi:10.2753/PMR1530-9576360302
  • Scholl, H. J., & Klischewski, R. (2007). E-government integration and interoperability: Framing the research agenda. International Journal of Public Administration, 30(8–9), 889–920. doi:10.1080/01900690701402668
  • Scholz, J. T. (1991). Cooperative regulatory enforcement and the politics of administrative effectiveness. American Political Science Review, 85(1), 115–136. doi:10.2307/1962881
  • Simon, H. A. (2000). Bounded rationality in social science: Today and tomorrow. Mind & Society, 1(1), 25–39.
  • Skelcher, C., & Torfing, J. (2010). Improving democratic governance through institutional design: Civic participation and democratic ownership in Europe. Regulation & Governance, 4(1), 71–91. doi:10.1111/j.1748-5991.2010.01072.x
  • Sørensen, E., & Torfing, J. (2009). Making governance networks effective and democratic through metagovernance. Public Administration, 87(2), 234–258. doi:10.1111/j.1467-9299.2009.01753.x
  • Vigoda, E. (2002). From responsiveness to collaboration: Governance, citizens, and the next generation of public administration. Public Administration Review, 62(5), 527–540. doi:10.1111/puar.2002.62.issue-5
  • Waugh, W. L., & Streib, G. (2006). Collaboration and leadership for effective emergency management. Public Administration Review, 66(s1), 131–140. doi:10.1111/j.1540-6210.2006.00673.x
  • Weil, D., Fung, A., Graham, M., & Fagotto, E. (2006). The effectiveness of regulatory disclosure policies. Journal of Policy Analysis and Management, 25(1), 155–181.
  • Wholey, J. S. (1999). Performance-based management: Responding to the challenges. Public Productivity & Management Review, 22(3), 288–307.
  • Worthy, B. (2010). More open but not more trusted? The effect of the freedom of information Act 2000 on the United Kingdom central government. Governance, 23(4), 561–582. doi:10.1111/gove.2010.23.issue-4
  • Wright, B. E. (2004). The role of work context in work motivation: A public sector application of goal and social cognitive theories. Journal of Public Administration Research and Theory, 14(1), 59–78. doi:10.1093/jopart/muh004
  • Yang, K., & Holzer, M. (2006). The performance–trust link: Implications for performance measurement. Public Administration Review, 66(1), 114–126. doi:10.1111/puar.2006.66.issue-1
  • Yang, K., & Kassekert, A. (2009). Linking management reform with employee job satisfaction: Evidence from federal agencies. Journal of Public Administration Research and Theory, 20(2), 413–436. doi:10.1093/jopart/mup010
  • Yu, H., & Robinson, D. (2012). The new ambiguity of “open government.” UCLA Law Review Discourse, 59, 178–208.
  • Zia, A., & Koliba, C. (2011). Accountable climate governance: Dilemmas of performance management across complex governance networks. Journal of Comparative Policy Analysis: Research and Practice, 13(5), 479–497. doi:10.1080/13876988.2011.605939
  • Zuiderwijk, A., & Janssen, M. (2014). Open data policies, their implementation and impact: A framework for comparison. Government Information Quarterly, 31(1), 17–29. doi:10.1016/j.giq.2013.04.003

Appendix 1

The open healthcare initiative had two parts, which were defined as follows:

Part 1:

NHS England will work with governments and civil society organisations internationally to create an online space to share experiences of embedding high quality standards into information, with a view to building an accreditation scheme to enable citizens and organisations to assess their progress.

Part 2:

NHS England will be improving the quality and breadth of information available to citizens to support them to participate more fully in both their own health care and in the quality and design of health services which will result in greater accountability of NHS England.

(Second National Action Plan of the United Kingdom, 2013)

The police open data initiative was defined as follows:

In response to recommendations of the President’s Task Force on 21st Century Policing, the United States is fostering a nationwide community of practices to highlight and connect local open data innovations in law enforcement agencies to enhance community trust and build a new culture of proactive transparency in policing. The Office of Science and Technology Policy and the Domestic Policy Council have been working on the Police Data Initiative in collaboration with Federal, state, and local governments and civil society to proactively release policing data, including incident level data disaggregated by protected group. This work aims to improve trust, bring better insight and analysis to policing efforts, and ultimately co-create solutions to enhance public safety and reduce bias and unnecessary use of force in policing.

(Third National Action Plan of the United States, 2015)