Politics of Comparative Quantification: The Case of Governance Metrics

Tero Erkkilä, B. Guy Peters & Ossi Piironen
Pages 319-328 | Received 09 Apr 2015, Accepted 09 Jan 2016, Published online: 04 Oct 2016

Introduction

Numbers expose problems, help to institutionalize new domains of decision making, and make complex issues commensurable, giving them a common form. Numbers play a role in the construction of social reality; they help to politicize issues, carry policy ideas from one context to another, and serve as backbones of decision making in politics, administration and jurisprudence. On the other hand, numbers are associated with neutrality, science and expertise, concealing their political character (Porter 1996; Arndt and Oman 2008). The present special issue explores the relation of politics and expert knowledge in methods that quantify attributes of governance in a comparative format.

The special issue contributes to comparative policy analysis by analyzing the places of politics in and behind the most familiar measurements that feature in many comparativists’ work, also published in the JCPA (Tremblay et al. 2003; Marxsen 2005; van de Walle 2006; Svendsen and Bjørnskov 2007; Lee and Whitford 2009; Serritzlew and Svendsen 2011; Maleki and Bots 2013; Seifert et al. 2013; Lin and Yu 2014). More specifically, we offer evaluations of the exact processes that result in quantitative variables and the objectification of social phenomena.

Statistics in general, but especially quantitative knowledge on abstract constructions such as democracy, good governance, sustainability and transparency, are not merely neutral or apolitical descriptions of reality. Large numbers are always representations of reality (Desrosières 1998), and thus incomplete perspectives embedded in (inter)subjective signification, strategic calculation, bargaining and practical considerations. Through empirical examination of governance indicators as expert knowledge, the articles in this special issue are premised on an attempt to open up instances where the measurement exercise diverges from the rationalist-apolitical ideal-type. We argue that while numbers are an important and indispensable asset for social research, they are not inherently superior to alternative strategies in providing knowledge on real phenomena.

While numerical knowledge has always had an important governance function, its significance has steadily grown with improved statistical methodology and progress in data collection. Processes of globalization – with social, cultural, economic and political facets – have made governance a matter of considerable complexity, neither the exclusive concern of nation-states nor of intergovernmental organizations. As a consequence, expert knowledge has increased in significance (compare Haas 1992). During the last two decades quantitative knowledge – and thus the comparative perspective – has been formally integrated into planning and evaluation of public policies at both the domestic and international levels (compare Power 1999; Lee and Kirkpatrick 2006). Numbers play an increasingly important role in the construction and discursive framing of political problems and, consequently, possible solutions.

The contributions in this special issue examine indicators and rankings that quantify and gauge attributes related to public governance. Since the early 1990s there has been a surge of international efforts to calculate the performance of states in terms of procedural aspects of governance, such as democracy, corruption and administrative efficiency. More recently, there has also been renewed interest in the outputs of governance: the established measurements of wealth and development (for example, gross domestic product [GDP] and the Human Development Index [HDI]) have been supplemented by attempts to measure the well-being, happiness and prosperity of nations and the sustainability of their policies (for example, the Stiglitz Commission [Stiglitz et al. 2009] and the Istanbul Declaration [World Forum on Statistics, Knowledge and Policy 2007]). This trend can be depicted in terms of transnational governance: power at the supranational level often works subtly through expert knowledge in governing regional, national and institutional decision making and its structures (Mahon and McBride 2009). Indicators and measurements are one type of mechanism that makes transnational governance effective.

Nils Steiner, in his article, investigates whether the occasional charges that Freedom House’s democracy ratings are politically biased in favor of US allies are warranted. The results of his analysis indicate an affirmative answer, although the estimates are less consistent in the period after 1989. Fabrizio De Francesco in turn takes the strategic orientation of international organizations at face value and examines the different modes of influence the Organisation for Economic Co-operation and Development (OECD) and the World Bank attempt to exert through their governance benchmarks. Out of his empirical cases he constructs an analytical framework that enables others to assess the degree of competitive and collaborative practices that any policy benchmark seeks to assert. Katja Freistein, analyzing the World Bank’s poverty measurement tools, likewise sees international organizations as strategic actors that employ numeric knowledge for purposes of governing. More than De Francesco, however, she stresses the observation that the production of indicators has become virtually a necessity for expert organizations to ensure their relevance and ability to operate. Tero Erkkilä concurs in analyzing the entry of new actionable indicators onto the global market of governance expertise. While newcomers are in a position to challenge the more established index producers by introducing methodological improvements in the applicability of benchmark data, they nevertheless tend to reproduce existing ideas and practices. The analyses of Freistein and Erkkilä suggest that developments in the field are driven not solely by rational considerations of producing more valid and reliable information, but also by the need to perform in a way that consolidates an organization’s status in the field of expertise. Le Bourhis’ description of the French and European endeavors to construct and implement indicator-based frameworks for environmental policy planning highlights the more traditional politicking that may introduce obstacles to the scholarly ideal of impartial measurement.

The Places of Politics

The special issue provides a novel look at a sample of governance measurements in their respective institutional and political contexts. In some of the articles the focus is primarily conceptual and methodological: what is observed is the politics within the measurements. Since the construct is conceptually reinforced by the recognition and application of the measurement results by researchers, the media and decision makers, it is useful to track the meaning a measurement gives to the measured variable. Here one can look at the input side (conceptualization, attributes, indicators, data) or at the output side (measurement results). Both strategies offer valuable information on the ideological and political “biases” that a measure may harbor.

The former strategy looks at the political choices made in the construction of the results, including alternative conceptual and methodological assertions (compare Munck and Verkuilen 2002). In a way, each article in this special issue testifies that all socio-political measurements, of which the general public often perceives only the results – the rankings – involve fundamental underlying theoretical decisions and interpretations of social reality. For example, by tracking the production process of two environmental policy indicators, Jean-Pierre Le Bourhis provides narratives of strategic contestation that result in outcomes very different from those initially envisioned.

Nils Steiner examines the interplay between the input and output sides. His focus is not on open contestation between different interest groups but on ideological contestation within the measurement itself. Using statistical methods, Steiner is able to show how and to what degree conceptual choices and operational decisions have affected the outcomes of Freedom House’s influential measurement of political rights and civil liberties. His approach relies on an external standard – alternative measurements of democracy – as a benchmark, and the potential biases exposed are thus delimited. Nevertheless, this strategy can reveal particular biases in a measurement, as Steiner’s article shows.

While the exposition of measurement bias is in itself interesting, the reasons for its existence should not be ignored. While we can start observing attempts to influence, contest and mobilize knowledge for varying purposes just by conceptual and methodological analysis of single indexes, an analysis juxtaposing several measurements can be even more illuminating in this respect. In this special issue, Tero Erkkilä compares several governance measurements and their incorporation of the attribute of “transparency”. After looking at the ideational terrain of “transparency” that the indexes produce, Erkkilä concludes that the politicization of ranking is leading to more nuanced indicators that might be even more effective in steering policies on the national level.

We can also examine measurements from an agential or tactical point of view: mapping the relevant actors that take part in measurement in a specific policy field, and examining their motivations, resources and relationships (competition and cooperation), for example in order to say more about the perceived variance in how various index producers’ efforts to “sell” their product to interested audiences are received. In this issue, Fabrizio De Francesco engages in this type of analysis by examining the behavior of the OECD and the World Bank and their strategic choices for policy benchmarking. Here the producers of specialist knowledge are depicted as strategically motivated players rather than neutral compilers of empirical data.

There is a need to examine the impacts of measurement. An excellent example is provided by Oded Löwenheim (2008), who shows how international good governance measurements serve to discursively reproduce global hierarchies and saddle poor countries, rather than the powerful states and international organizations, with responsibility for their own (under)development. In this issue, Le Bourhis examines the promotion of, and resistance to, various environmental and sustainable development indicators (SDIs). His analysis sheds light on the discrepancy between the wide diffusion of these tools and their limited impacts. In both of his cases, competition between strong coalitions, consisting of actors from governmental, administrative, technical and scientific spheres with differing interests, resulted in the “neutralization” of indicators as effective policy tools. Indeed, although a powerful instrument, index knowledge is resisted for various reasons. Le Bourhis shows that no number is considered valid as such. The success of a number, from even the most authoritative producer, is never simply taken for granted. Much depends on the strength and extension of the coalition of its supporters and users (scientific, administrative and political).

Overall, the articles in this issue highlight the varying ways in which governance indexes are in fact politically attuned. In looking at the numbers produced, together with the practices of production, dissemination and resistance, the authors problematize the privileged epistemic status of quantitative expert data. They thus come to share an important judgment: while there is no escape from expertise being generally associated with quantitative knowledge, and while numbers have powerful uses, we should resist the temptation to treat quantitative knowledge as more trustworthy than qualitative knowledge. The authors of this special issue are by no means opposed to quantification. Independent of their deeper methodological convictions, which no doubt vary, the authors agree on two points. First, their work explicitly testifies that numeric information should be treated carefully and should not be unduly privileged in policy processes. And second, what can implicitly be derived from the articles is that numeric knowledge – especially in the handy form of international rankings and league tables – provides experts with an influence, both locally and globally, that they may not otherwise have had.

In many respects, it may be problematic if general-level transnational policy frameworks (and internationally commensurate data) come to define knowledge on local-level phenomena (Erkkilä and Piironen 2009). As we have seen, however, there is tension among the index-producing experts (and their organizations), as well as local resistance to numerical mechanisms of governance, many of which aim at trans-local comparability. Paradoxically, it may be more overt politics that comes to moderate the political inclinations hidden at the conceptual and methodological levels of measurement.

Comparing Comparisons

Previous analyses of rankings and governance indicators have emphasized the power at play in rankings (Löwenheim 2008; Erkkilä and Piironen 2009; Journal of International Relations and Development Special Issue October 2012), the methodological aspects of producing rankings (International Public Management Journal Special Issue July 2008; Hague Journal on the Rule of Law Special Issue September 2011), and objectification through measurements (Culture Unbound Theme Issue 4/2012). The use of country ranking data in explanatory analyses of various kinds is, of course, a given in political science and public administration journals. In addition, the Journal of Comparative Policy Analysis has contributed directly by publishing articles in which governance rankings have been evaluated (van de Walle 2006; Lin and Yu 2014) or presented by their constructors (Svendsen and Bjørnskov 2007; Chinn and Ito 2008; Pujol 2009; Seifert et al. 2013). In 2014 JCPA published a special issue on corruption and trust. Contributions in that issue highlight concerns of conceptual clarity in studying and measuring these attributes of governance (Fritzen et al. 2014, p. 119).

This special issue explores the politics of governance statistics with respect to expert knowledge. The articles highlight the multiple ways in which politics is involved in the measurement of governance attributes. Taken together, the articles also examine a wide array of measurements that diverge in many respects. In the following we propose a loose qualitative framework that may help the reader to classify measurements and indexes. We suggest that it might be helpful to map the differences between measurements at least in terms of: (a) type of producer organization, (b) purpose and governance function, (c) scope, (d) method of data gathering, (e) form of presentation, and (f) visibility or strategy of publication (see Table 1).

Table 1. Framework for classifying measurements

Quantitative comparative governance data is produced – and measurements developed – in different types of organization: universities, public administrations, international organizations, non-governmental organizations (NGOs) and a variety of for-profit organizations and their affiliates. The motives, roles and resources differ vastly within this category, thus connecting it strongly to the other categories of our framework.

Only a handful of purely academic measurements have reached public visibility comparable to those produced by international or private organizations such as the World Bank Institute or World Economic Forum. Arguably, international organizations are the most important producers of transnational governance indices. These organizations have resources, international staff and contacts together with privileged access to information produced within national bureaucracies. The development of measurement in these cases is politically motivated; benchmarks may (de)legitimate policies and methods of implementation, and rankings may thus serve as a basis for resource allocation. As a result, their construction involves reconciliation of stakeholder preferences, political conflicts and compromises, bargaining and trade-offs.

If we bear in mind the organizational variety involved in the ranking industry, it becomes evident that there is no single purpose for governance indicators. As with the transformations in the roles and functions of the organizations producing measurements, their purpose has shifted. Where indicators were previously produced almost solely for comparative research, they now more often acquire an instrumental character, being used for resource allocation, administrative development or policy evaluation, as argued by Erkkilä (in this volume). Linked to development economics and aid allocation, there is an active attempt to weigh national governments against an international standard, creating pressure to adhere to the norm.

Although the World Economic Forum has no formal authority over nation states, its Global Competitiveness Index has become a yardstick for national economic efficiency. In a similar fashion, Transparency International, an NGO, raises public awareness of corruption by presenting it as a competition that can spur nations to improve. Investors, in constant need of up-to-date, easy-to-digest information about destinations relevant to their financial ventures, are a growing consumer group for country data. As international flows of money have increased and institutional economics has attained a firmer foothold, there is growing demand for commensurate knowledge concerning public governance.

But as discussed by Freistein (in this issue), the production of measurements also has governmental functions which are more subtle than the explicit uses in research and policy making. Examining the quantification of poverty by international organizations, Freistein claims that organizations produce numeric knowledge to assert their position – and to build up their identities – as legitimate authorities in the policy domain in which they are active. Erkkilä (in this issue) analyzes the development whereby new actors join the field of global governance measurement as a process of structuration. While the newcomers often enter the field by criticizing the existing knowledge products, they nevertheless come to adopt many of the existing practices and ideas.

Datasets, constructed for varying purposes by a range of organizations, can also be distinguished in terms of their scope. We can form a graded continuum from those measuring a particular phenomenon (such as the literacy rate of the adult population or the corporate tax rate) to those with broad analytic coverage (such as good governance or gender equality). The former are usually more concrete and technical, with the operational definition in close approximation to the overarching concept. At one extreme, politics is evident mainly in the decision to measure a thing, not so much in how it is measured or what the referent is. At the other extreme, the objects of measurement are abstract and contested; operationalization calls for interpretation and the aggregation of multiple indicators – with more or less arbitrary weighting rules. Politics comes in many forms, and at all stages from the initial decision to measure to the presentation and use of the scores.

Another important differentiation by scope relates to political geography: whether the dataset covers several countries or is limited to a single country. In this issue, the article by Le Bourhis is the only one to include a national dataset, whereas the others look at international datasets where the cases are countries. Nevertheless, while there is broad consensus about the benefits of wide geographical extension, the same is not true of conceptual extension. According to Erkkilä (in this volume; also Erkkilä and Piironen 2014), in the last decade there has been a turn away from single composite scores and aggregate rankings towards policy-specific indicators and disaggregated datasets (see the discussion of presentation below).

We can divide socio-political measurements into two overlapping categories by the type of indicators they employ ‒ in other words, by the main method of data gathering. Objective indicators pertain to empirical facts that – after operationalization – do not depend on additional subjective interpretation. Whether a country does or does not have UN membership is inscribed into international law; establishing the share of women on corporate boards is relatively straightforward. Indicators whose values are assigned through a process of subjective assessment or interpretation also result in cold numbers that tend to mask the true nature of the data. Subjective indicators are applied when valid and reliable objective data is hard or costly to get, or when it is openly admitted that objective information – generally, or relating to a specific case – is impossible to achieve. Strictly speaking, indicators are rarely purely objective; they all involve subjective assessment of some type.

Indicators that can be deemed clearly subjective come in many varieties, but usually take the form of stakeholder surveys or expert assessments. It may be cost-effective, or more reliable, to rely on expert assessments, as in producing comparative information about political liberties in countries whose official – “objective” – statistics are lacking or not credible. While such methods may bring other types of bias into the analyses (Lin and Yu 2014), these can be minimized with various procedures, including detailed coding rules, inter-coder assessment procedures and public access to disaggregated data.

Overall, datasets claiming comparative competence all strive for some kind of objectivity. But, indeed, not only should we be wary of the possibility of bias in measurements based on expert assessments, as Steiner’s contribution in this issue suggests, we should be even more suspicious of measurements drawing authority from their use of objective indicators and their seeming neutrality.

Measurements can be distinguished, with regard to the presentation of results, into aggregated and disaggregated indexes. Aggregated figures, where the different elements measured are combined into a single figure, are usually used for creating rank orders, allowing simple comparison of cases at the highest possible level of abstraction. Many of the early global governance indicators, such as Freedom House’s Freedom in the World Index, were rankings in which countries were ordered according to their state of democratic liberties.

Aggregate presentations of measurement results have been criticized for their analytical vagueness (van de Walle 2006, p. 440). Moreover, as Tero Erkkilä discusses in his contribution, the presentation of aggregate figures in rank orders has become politically controversial. For instance, the World Bank Institute’s Worldwide Governance Indicators were criticized by certain loan-receiving countries, leading to attempts to create a new set of indicators that was presented in disaggregated form. Disaggregated datasets are often called mappings, allowing their users to select the subject of measurement instead of merely looking at the aggregate number. Many of the governance indicators currently being developed aim for disaggregated presentation. The recent shift in the style of presentation perhaps also mirrors technical advances: disaggregated mappings are a fitting expression for assessments that are increasingly presented cartographically, with online maps including interactive charts about the countries being analyzed.

The publication strategies of the producer organizations also vary. Some aim for broad media coverage of their results, whereas others address solely expert audiences and stakeholders. This is also related to the presentation of the figures, as rankings are more likely to seize the attention of the media, whereas numerical assessments containing disaggregated data are more likely to be discussed only within policy expert circles.

Of course, certain topics gain more media visibility than others. Corruption makes headline news, while “freedom of the press” is not necessarily equally interesting to broad audiences. The OECD’s Programme for International Student Assessment comparisons of education systems have attracted much attention not only because they address one of the key institutions in society, but also because they potentially concern anyone who has attended basic education.

Finally, organizational role matters. As an NGO, Transparency International may publish its corruption ranking with a good deal of audacity, but the OECD and the European Union are in a more difficult position, and the need to avoid political controversy with their member states imposes a more reserved tone and form of presentation.

Summary

The contribution of this special issue to comparative policy analysis is to increase our sensitivity to the potentially political aspects of quantification and measurement. The articles provide theoretical perspectives for understanding the processes of quantitative governance assessment, including conceptual, methodological and contextual issues. International datasets on governance have come to supplement domestic statistics, registries and quantitative assessments during the last two decades. Knowledge about politico-administrative structures, organizations and cultures is transformed into commensurate, standardized formats, allowing simple description and analysis by mathematical methods. As much as – and often more than – domestic measures tuned to catch local intricacies, international datasets are in many ways enmeshed with politics, even where this is not intended by their producers. We find politics in conceptualizations, operationalizations, measurement methods and the presentation of results. We see traces of political standpoints not only in the numbers presented and the methodologies used, but also when we examine the producers and users of governance data, their motives and networks, and those who benefit and those who lose. While the show must go on, the contributors to this issue of the journal take a deep breath, pause for a moment, and attempt to understand where we – surrounded by numbers – stand at the moment.

Acknowledgments

Previous versions of the contributions to this special issue were presented at the ECPR General Conferences in Reykjavik (2011) and Glasgow (2014). We wish to thank the reviewers of the submitted articles.

Additional information

Funding

Tero Erkkilä and Ossi Piironen’s work was supported by the Academy of Finland.

Notes on contributors

Tero Erkkilä

Tero Erkkilä is Associate Professor of Political Science at the University of Helsinki. His research interests include transnational governance, public institutions and collective identities.

B. Guy Peters

B. Guy Peters is Maurice Falk Professor of American Government at the University of Pittsburgh, and President of the International Public Policy Association. He works in governance theory, public policy and comparative public administration.

Ossi Piironen

Ossi Piironen works as a Senior Researcher at the Ministry for Foreign Affairs of Finland. He has previously worked as a doctoral researcher at the University of Helsinki. His interests include politics of quantification, governance and governmentality, and theories of democracy.

References

  • Arndt, C. and Oman, C., 2008, The politics of governance ratings. Working paper MGSoG/2008/WP003, Maastricht Graduate School of Governance, Maastricht University.
  • Chinn, M. D. and Ito, H., 2008, A new measure of financial openness. Journal of Comparative Policy Analysis: Research and Practice, 10(3), pp. 309–322. doi:10.1080/13876980802231123
  • Desrosières, A., 1998, The Politics of Large Numbers: A History of Statistical Reasoning (Cambridge: Harvard University Press).
  • Erkkilä, T. and Piironen, O., 2009, Politics and numbers: The iron cage of governance indices, in: R. W. Cox III (Ed.) Ethics and Integrity in Public Administration: Concepts and Cases (Armonk: M.E. Sharpe), pp. 125–145.
  • Erkkilä, T. and Piironen, O., 2014, (De)politicizing good governance: The World Bank institute, the OECD and the politics of governance indicators. Innovation: The European Journal of Social Science, 27(4), pp. 344–360.
  • Fritzen, S. A., Serritzlew, S. and Svendsen, G. T., 2014, Corruption, trust and their public sector consequences: Introduction to the special edition. Journal of Comparative Policy Analysis: Research and Practice, 16(2), pp. 117–120. doi:10.1080/13876988.2014.896124
  • Haas, P. M., 1992, Introduction: Epistemic communities and international policy coordination. International Organization, 46(1), pp. 1–35. doi:10.1017/S0020818300001442
  • Lee, N. and Kirkpatrick, C., 2006, Evidence-based policy-making in Europe: An evaluation of European Commission integrated impact assessments. Impact Assessment and Project Appraisal, 24(1), pp. 23–33. doi:10.3152/147154606781765327
  • Lee, S.-Y. and Whitford, A. B., 2009, Government effectiveness in comparative perspective. Journal of Comparative Policy Analysis: Research and Practice, 11(2), pp. 249–281. doi:10.1080/13876980902888111
  • Lin, M.-W. and Yu, C., 2014, Can corruption be measured? Comparing global versus local perceptions of corruption in east and southeast Asia. Journal of Comparative Policy Analysis: Research and Practice, 16(2), pp. 140–157. doi:10.1080/13876988.2013.870115
  • Löwenheim, O., 2008, Examining the state: A Foucauldian perspective on international ‘governance indicators’. Third World Quarterly, 29(2), pp. 255–274. doi:10.1080/01436590701806814
  • Mahon, R. and McBride, S., 2009, Standardizing and disseminating knowledge: The role of the OECD in global governance. European Political Science Review, 1(1), pp. 83–101. doi:10.1017/S1755773909000058
  • Maleki, A. and Bots, P. W. G., 2013, A framework for operationalizing the effect of national culture on participatory policy analysis. Journal of Comparative Policy Analysis: Research and Practice, 15(5), pp. 317–394.
  • Marxsen, C. S., 2005, Russia in Olson’s template: Regulation, corruption and environmental idealism. Journal of Comparative Policy Analysis: Research and Practice, 7(3), pp. 249–255. doi:10.1080/13876980500209348
  • Munck, G. L. and Verkuilen, J., 2002, Conceptualizing and measuring democracy: Evaluating alternative indices. Comparative Political Studies, 35(1), pp. 5–34. doi:10.1177/0010414002035001001
  • Porter, T. M., 1996, Trust in Numbers (Princeton: Princeton University Press).
  • Power, M., 1999, The Audit Society: Rituals of Verification (Oxford: Oxford University Press).
  • Pujol, F., 2009, An unconventional composite index of international influence. Journal of Comparative Policy Analysis: Research and Practice, 11(1), pp. 145–157. doi:10.1080/13876980802648342
  • Seifert, J., Carlitz, R. and Mondo, E., 2013, The open budget index (OBI) as a comparative statistical tool. Journal of Comparative Policy Analysis: Research and Practice, 15(1), pp. 87–101. doi:10.1080/13876988.2012.748586
  • Serritzlew, S. and Svendsen, G. T., 2011, Does education produce tough lovers? Trust and bureaucrats. Journal of Comparative Policy Analysis: Research and Practice, 13(1), pp. 91–104. doi:10.1080/13876988.2011.538543
  • Stiglitz, J. E., Sen, A. and Fitoussi, J.-P., 2009, Report by the Commission on the Measurement of Economic Performance and Social Progress. Commission on the Measurement of Economic Performance and Social Progress. Available at http://www.stiglitz-sen-fitoussi.fr/documents/rapport_anglais.pdf (accessed 9 April 2015).
  • Svendsen, G. T. and Bjørnskov, C., 2007, How to construct a robust measure of social capital: Two contributions. Journal of Comparative Policy Analysis, 9(3), pp. 275–292. doi:10.1080/13876980701494699
  • Tremblay, R. C., Nikolenyi, C. and Otmar, L., 2003, Peace and conflict: Alternative strategies of governance and conflict resolution. Journal of Comparative Policy Analysis: Research and Practice, 5(2–3), pp. 125–148. doi:10.1080/13876980308412697
  • van de Walle, S., 2006, The state of the world’s bureaucracies. Journal of Comparative Policy Analysis: Research and Practice, 8(4), pp. 437–448. doi:10.1080/13876980600971409
  • World Forum on Statistics, Knowledge and Policy, 2007, Istanbul Declaration. Available at http://www.oecd.org/newsroom/38883774.pdf (accessed 9 April 2015).
