Accounting for quality: arts evaluation, public value and the case of “Culture counts”

ABSTRACT

Arts policy has a longstanding relationship with the concept of “quality” and the ways in which organisations measure, evaluate and account for it. Culture Counts, an evaluation system and digital platform, compiles data from standardised evaluation surveys of different stakeholder groups – organisations, audiences, critics, funders and peers – and provides the means to compare and triangulate data in an accessible format. As a result, it claims to provide a more effective, democratic tool for quality measurement of art, which demonstrates the public value of funding [Department of Culture and the Arts, & Knell, J. (2014). Public value measurement framework: Measuring the quality of the arts. Perth: Department of Culture and the Arts.]. Through qualitative research with two consortia of organisations involved in Culture Counts pilot projects in Manchester, England and Victoria, Australia, we explore these claims, comparing the reception and promotion of the system in both countries and considering its potential incorporation into policy assessment frameworks and adoption within arts organisations’ existing evaluation capacities.

Introduction

In late 2016, the news that the Arts Council of England (ACE) planned to make a standardised quality evaluation system compulsory for funded organisations caused a small flurry of media activity. Although later revoked, the ACE had announced plans to use the “quality metrics framework” provided through the Culture Counts platform (ACE, Citation2016a), and commentators pointed out that in the process of trialling this platform in three countries (the UK, Canada and Australia), arts organisations had been sceptical about its value in measuring quality and concerned about the administrative burden associated with its use (Albert, Citation2016; Hill, Citation2016; Meyrick, Maltby, Phiddian, & Barnett, Citation2016). Culture Counts is a digital application and web portal that collects data on arts and cultural experiences based on standardised metrics. The system aims to allow the arts sector to benchmark the quality of an arts or cultural experience by developing internationally recognised metrics for evaluation, and by supporting the means for data collection, reporting and analysis against these measures.

Since 2011, Culture Counts has received substantial public funding in the UK and Australia to develop a market-ready system as a means through which the quality of organisational delivery can be rated, assured, analysed and improved, thereby delivering greater public value (Knell et al., Citation2015). The decisions to fund the trials and subsequently to consider making the metrics system compulsory for newly funded organisations were motivated by the belief that it would assist arts organisations to improve the quality of their work through a better understanding of the value of their product. In turn, it is hoped that the system will help organisations to report to funding agencies and assist the agencies themselves to identify and articulate the value of the arts to society.

The focus of this article is on the users’ experience of the platform and early indications of its value and limitations for policy assessment frameworks. It is not the first research to examine how the platform has been received (e.g. Nordicity, Citation2016). However, the research presented here was conducted across two separate trials of the Culture Counts platform in Victoria, Australia and Manchester, UK, making it the first to critically analyse the reception and potential of Culture Counts in two contexts. One of Culture Counts’ key ambitions is to accumulate data transnationally and over a long period to allow its users to benchmark against other organisations in the same art-form or with similar audiences: it seems appropriate, then, that critical reviews of the platform are collected transnationally and over a sustained period, and this article is an early step in that direction.

In the first section, we set out the background and methodology of research, before going on to consider its broader context through a literature review concerning quality assessment for publicly funded arts organisations. We then outline research findings that concern the perceived benefits and limitations of Culture Counts from the users’ perspective, and critically examine the claim that use of the system will provide greater public value, before turning to conclusions.

Background and methodology

The standardised metrics underpinning Culture Counts were first developed by the Department of Culture and the Arts (DCA) in conjunction with the arts sector in Western Australia (WA). The DCA aimed to develop a framework to measure the quality, value, impact and reach of arts and cultural activities (DCA & Knell, Citation2014, p. 2). In July 2011, the Department commissioned Pracsys Economics (Australia) and the Intelligence Agency (UK) to develop a system to measure and evaluate the intrinsic value of arts and culture. Following international consultation with artists, funders and academics, they together produced the Public Value Measurement Framework in May 2012, which has since been substantially refined (Chappell & Knell, Citation2012, p. 3). This model identifies three areas of value: intrinsic value, instrumental value and institutional value, drawing substantially on Holden’s work (Holden, Citation2006, Citation2009). It proposed a set of eight core “quality dimensions” through which to measure the intrinsic value specifically: relevance, captivation, originality, distinctiveness, national and global excellence, risk and rigour. The design of the Culture Counts digital platform followed, organising data collection through standard question forms based on these dimensions and providing a comparison of a pre- and post-activity assessment by artists, peers and audiences (DCA & Knell, Citation2014).
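
To make the dimension-based, pre- and post-activity design described above more concrete, the short sketch below shows one way such survey responses could be represented and averaged. It is an illustrative sketch only: the statement wordings, scores and data structures are hypothetical stand-ins, not the actual Culture Counts schema.

```python
# Illustrative sketch only: dimension-based responses on a 0-1 slider scale,
# with a pre-event (expectation) vs post-event (experience) comparison.
# Dimension names follow the framework described above; statement wordings
# and scores are hypothetical, not the Culture Counts instrument itself.
from statistics import mean

DIMENSIONS = {
    "captivation": "It absorbed and held my attention",        # hypothetical wording
    "relevance": "It had something to say about my life",      # hypothetical wording
    "rigour": "It was well thought through and put together",  # hypothetical wording
    # ... remaining dimensions omitted for brevity
}

def mean_scores(responses):
    """Average each dimension's 0-1 scores across a list of respondent dicts."""
    return {dim: mean(r[dim] for r in responses) for dim in DIMENSIONS}

# Invented pre-event (expectation) and post-event (experience) responses
pre = [{"captivation": 0.70, "relevance": 0.60, "rigour": 0.80},
       {"captivation": 0.80, "relevance": 0.50, "rigour": 0.70}]
post = [{"captivation": 0.90, "relevance": 0.65, "rigour": 0.85},
        {"captivation": 0.85, "relevance": 0.70, "rigour": 0.80}]

expected, experienced = mean_scores(pre), mean_scores(post)
for dim in DIMENSIONS:
    print(f"{dim}: expectation {expected[dim]:.2f} -> experience {experienced[dim]:.2f}")
```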

The system was then introduced to Manchester in a trial involving 13 arts and cultural organisations (Bunting & Knell, Citation2014). Although this ran parallel to other testing in Australia, the Manchester-based consortium chose not to simply adopt the metrics developed in WA but to co-create their own. Whilst these remained ostensibly the same as in WA, in Manchester the metrics were allocated to different groups of participants in data collection (nine metrics were chosen for surveying the public, and a further five – including concept, risk, originality, national excellence and global excellence – were used only for “self” and “peer” assessment; see Table 1). The English pilot was declared a success, with “great potential” (Bunting & Knell, Citation2014, p. 69), although it raised a number of issues in relation to the refinement of the metrics, the range of art forms evaluated, the volume of data and the practicalities of implementation.

Table 1. The “Manchester Metrics”.

The research discussed here is based on analysis of two further trials of the system: one in Manchester, the other in the state of Victoria, Australia. There was some variation between the research objectives and methods of the two projects, but both aimed to critically examine the value of Culture Counts, its utility to participating arts organisations and its contribution to public understanding of cultural value.

In May 2014, the Manchester Metrics Group was awarded a Digital R&D Fund for the Arts grant to further develop the metrics framework, as part of the fund’s “Big Data” tranche.Footnote1 The objectives of the project were to further test the metrics amongst a wider range of art forms and settings and to develop a “self-drive model” targeted at the non-profit market, which organisations could use on their own without the help of consultants. The project also included academic research, led by the University of Manchester, on the value and benefits of co-producing metrics and utilising big data approaches to “data-driven decision-making” in the arts, reflecting the specific interests of the Manchester partners and the grant scheme. Research methods comprised a series of workshops, a literature review and an online discussion forum organised around the two themes of co-production and big data. The team also observed and participated in test events, and undertook hour-long qualitative interviews with eight participating organisations concerning their experience of trialling the system.

In Victoria, a university-based research team was commissioned alongside a state-funded trial of Culture Counts to examine the experience of 18 participating organisations. Researchers conducted 54 interviews (before, during and after the trial) with management staff, the peer reviewers who contributed survey data, and personnel from Creative Victoria. The organisations were asked about their objectives for participation, previous experience of evaluation tools, their experience of the Culture Counts tool, what they learned about their audience, how and with whom this information was shared, and the organisations’ plans for future adoption and use. Peer reviewers were asked about their experience of contributing data, and Creative Victoria about their aims and experience of the pilot. As with Manchester, the research did not seek to analyse the Culture Counts data itself except to provide background knowledge, because the aim was to identify the perceived value and limitations of the Culture Counts tool to the organisations, to the arts sector and to the funding agency.

Since these two projects, public funds have continued to support further trials. A “free trial” was offered to the ACE National Portfolio of arts organisations and major partner museums in 2016, taken up by 150 organisations, with the subsequent announcement that the framework would be rolled out as a mandatory requirement for portfolio organisations at a higher level of funding (ACE, Citation2016b). In Australia (at the time of writing), Creative Victoria had not formally determined the success of the trial or what action it would take as a result. However, trialling continues with 15 Queensland arts organisations, and use of the platform has since been explored in a number of other countries worldwide, including Singapore and Canada (Culture Counts, Citation2016).

The following section considers the context for Culture Counts through a literature review of current practices of value assessment in the arts, before discussing the research findings from these two pilot projects in more detail.

Data and metrics for quality assessment in the arts

Culture Counts presents a persuasive proposition to a sector that has struggled to agree on methodologies for assessing quality and artistic excellence that can contribute to monitoring requirements for funders and other stakeholders (Bailey & Richardson, Citation2010). At times when public resources are scarce, there is increased interest in how existing data can be used to support benchmarking and assessment. This strategy has been partially exploited for advocacy purposes, for example, in demonstrating the impact of public funding on policy areas (Mowlah, Niblett, Blackburn, & Harris, Citation2014) and also for benchmarking, market segmentation and audience development tools, such as Audience Spectrum (UK), which are dependent on mass participation surveys such as Audience Finder (UK) and Audience Atlas (Australia) and other census and commercial data. The increasing availability of “big data” through transactional and social media data is of interest to the sector and policy makers (Crossick & Kaszynska, Citation2014) whilst there is a tacit recognition that at present data generated through bespoke systems such as Culture Counts is not “big” enough (Gilmore, Arvanitis, & Albert, Citationin press).

Considerable pressure rests on organisations to provide reporting data in every area of their activities. These data contribute to the mechanisms for managing accountability relationships (Chiaravalloti, Citation2016). They are characteristic of the technocracies of New Public Management, which assume there are measurable outcomes from state funding and which contribute to the ways in which these values are articulated (Belfiore, Citation2004; Gray, Citation2002; Zan, Citation2000). There are multiple, increasingly byzantine and sometimes competing methodologies for evaluation which aim to evidence the “returns on investment” generated by a defensive and instrumental policy field (Belfiore, Citation2012). Stakeholders in these accountability relationships are commonly configured as tri-partite – “customers”, community/society and the professional field (Boorsma & Chiaravalloti, Citation2010) – which corresponds broadly to the forms of public value identified by Holden (Citation2006): intrinsic/public, instrumental/policy-oriented and institutional values.

The design of Culture Counts recognises these accountability relationships and attempts to provide objective means for appraising artistic value by surveying the three sides of the triangle (“public”, “peer” and “self”) with standardised questions. To some degree, this is an unusual approach. Most performance evaluation exercises recognise the need to reflect internal appraisal, alongside feedback from external audiences, through formative and process evaluation, and the evaluation of artistic production is routinely scrutinised internally by a range of arts managers (marketing teams, technical staff, artistic directors and programmers) (Chiaravalloti, Citation2016). But it is rare for the appraisal of artistic value to be assessed by internal and external stakeholders via the same methods of data collection and to a potentially equal degree.

Mechanisms for improving organisational “data culture” have become central to evaluation practices to promote transparency and accountability in decision-making (Lilley & Moore, Citation2015). Moore (Citation2016) argues that “data-driven decision-making” in the arts is prohibitive and exclusive: excessively technocratic, it requires specialist skills in order to process and handle data and to close the “reverential gap” between public, artists and commissioning institutions (p. 110). Bailey and Richardson propose a data culture which promotes openness to feedback (including subjective opinion) from audiences, supports meaningful conversations internally and with peers, and avoids “box-ticking” for funders’ requirements (Citation2010, p. 303). They argue that artistic self-assessment is primarily for ongoing improvement and should draw on the wide range of qualitative and quantitative methods for audience research available in the artistic self-assessment toolbox. The question of what makes assessment “meaningful” is left rather unaddressed, however – whose definition of artistic value has more (or less) meaning? They also suggest that each organisation has bespoke requirements, making benchmarking against other organisations hard to achieve and undermining the case for universal systems of assessment.

Meaningful involvement of the public in decision-making processes has nominally been encouraged but in some cases actively resisted in the arts (Jancovich, Citation2015, p. 19). A major public consultation by ACE identified that the quality of the artistic experience is as important to the public as it is to organisations, but found different definitions (and anxieties) at play (Bunting, Citation2007). Artistic producers were concerned with the quality of production in terms of whether original objectives were met, whereas artists were concerned with technical articulation and how artworks contributed to a particular art form or larger body of work. By contrast, audiences were concerned with emotional response, but showed discomfort about their ability to assess quality, believing that their own subjective definition of the term would not be as valuable as that of an artistic expert. This anxiety concerning lack of expertise has been found to influence audience tendencies to provide positive responses to interviewers (Johanson & Glow, Citation2015). However, research also suggests that qualitative methods allow audiences to define their own metrics (Radbourne, Glow, & Johanson, Citation2010; Radbourne, Johanson, & Glow, Citation2010), to express negative opinions that closed-question surveys preclude (Foreman-Wernet & Dervin, Citation2009) and to further enrich their cultural experiences through post-event discussion and exchange (Pitts & Gross, Citation2017).

Carnwath and Brown’s (Citation2014) commissioned literature review on the intrinsic benefits of arts experiences appraises indicators which might objectively quantify these benefits for advocacy and policy-making. Whilst they encourage organisations to map and project outcome measures onto post-event survey questions, they also identify limitations to post-event surveying, including the limited comparability across types of events and the short time-scale of impacts that such surveying can capture. In his foreword to the report, Alan Davey, the then Chief Executive of ACE, claims “you can’t tick a box marked profundity” (Carnwath & Brown, Citation2014, p. 2). Despite this warning, the report recommends quantifiable indicators, signalling that a bureaucratic instinct for measures that objectify intrinsic experience prevails (Gilmore, Citation2014).

In summary, this survey of literature supports the call for methodologies which bring together data from different stakeholder groups, to take into account different accountability relationships, and combine qualitative and quantitative methods. There is also a desire to provide a comparable set of benchmarks through common definitions of what makes something artistically valuable. We have, however, identified a number of critical issues. These include the role of the expert, the involvement of the public in defining value, the efficacy of the system, both in terms of satisfactory data volume and its reporting mechanisms, and the capacity of organisations to participate in complex data cultures with diminished organisational capacities. The next section returns to the findings from the research on Culture Counts with these issues in mind.

Manchester and Victoria user experiences

A key aspect of Culture Counts is its promise to help arts organisations tell their “value stories” based on research and analysis with key stakeholder groups, including audiences and funders, in ways which both improve the sector’s capacity for self-improvement and ensure an element of control of these stories for the sector, by the sector. Giving the sector “control of the language of quality performance assessment” is a key principle of the platform (Culture Counts, Citation2016). In both trials, participating organisations reported that Culture Counts provided opportunities for networking with stakeholders and other organisations in the sector. In Victoria, it provided an opportunity to talk to audience members, donors and peers. The notion that Culture Counts was led by the arts sector was very important to Manchester participants and provided a sense of being part of a larger intellectual exercise that they and their peers felt they owned.

In both cases, organisations expressed the view that the use of a scale for the various dimensions helped them articulate an account of audience captivation, risk and artistic leadership. Organisations were impressed by the efficient collection of data in a standardised form. This was attributed to the digital platform providing a slick and attractive survey instrument, described as “straightforward”, “simple” and “intuitive” for audiences. The platform’s success was, however, highly dependent on the levels of support from the Culture Counts consultants, who acted as an on-demand call desk (but only for the duration of the trials):

Our question moving forward as to whether we’d use this platform or not would be that that level of support – in terms of building a survey, choosing the questions and analysing the data – wouldn’t be included in the standard package, so some of the benefits of using that platform are negated when that support is taken away. (Victorian trial respondent)

As well as working with the consultants who supported the trials, organisations also appreciated the platform’s design, which increased audiences’ willingness to use the swipe-through pages, and the immediacy of feedback information. There were a few exceptions: for example, free outdoor events presented difficulties for capturing post-event data immediately after the event. However, on the whole, respondents reported that they found the presentation of the survey results attractive and professional, with easy-to-read data and graphics that would look good when reporting to funders and other stakeholders.

Building evaluation skills and practices

In both trials, organisations identified that part of the trials’ value lay in the development of organisational skills and practices in evaluation. For a number of Victorian organisations, the trial provided opportunities to develop skills in audience evaluation, such as how to frame questions to elicit information from audiences. They reported that the opportunity to “hear from audiences” was an important departure from evaluation tools that simply collected demographic or customer experience data.

The extent to which Culture Counts was perceived to have enhanced organisational skills and capacity depended on the participant organisation’s data culture and the conventions of the art form or experience it delivers. For example, a Victorian respondent noted that Culture Counts was indicative of the galleries sector engaging more strongly with audiences:

[A]t the moment a lot of them are relying on visitor books [for audience data], which is not very helpful at all and it doesn’t help the audience feel the kind of connection that they want to have, that we’ve noticed from our research that they want to have by attending a gallery. (Victorian trial respondent)

A similar response was noted in Manchester in relation to long-standing practices of museum visitor research, including staff training and budgets for contracting external evaluators. Respondents suggested this is not the same for performing arts venues:

[T]hey’re not used to capturing data from people as they exit, they just let them go out and they obviously get a lot of information from ticket sales and so it was more of a culture change there. (Manchester trial respondent)

There was also a noticeable distinction made between the use of Culture Counts for quality assessment (in museums) and the desire to build baseline audience research (among performing arts and events producers).

Providing objectivity around prior knowledge

In both trials, respondents consistently reported that Culture Counts was more successful in capturing data to confirm existing perceptions and values, than necessarily introducing new insights into the organisations’ activities:

I don’t think it told us anything we wouldn’t have known but for me the value of this has always been in helping organisations and managements to make better decisions. … The ways [the metrics] work is [to judge] whether what you seek to do and how you seek to do it measured against the public and peer expectation of what you’re seeking to do and why you’re seeking to do it. If they tally then one could argue your plan basis is good. (Manchester trial respondent)

The triangulation of data presented in an acceptable format helps to objectify knowledge for these organisations. As the representative of the Manchester Museum articulated in the Manchester Metrics Report, “The most insightful aspect of the data was the congruence between self, peer and public on six of the nine measures, which suggests that we are approaching ‘truth’ on those issues” (Bunting & Knell, Citation2014, p. 40).
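
The kind of triangulation this respondent describes can be pictured as a simple comparison of group means per dimension. The sketch below is a hypothetical illustration of that idea, not the platform’s actual reporting logic; the group scores and the agreement tolerance are invented.

```python
# Hypothetical illustration of "congruence" across respondent groups: for each
# dimension, compare mean scores from self, peer and public assessors and flag
# those that agree within a chosen tolerance. All values here are invented.

scores = {
    "self":   {"captivation": 0.82, "relevance": 0.70, "rigour": 0.88},
    "peer":   {"captivation": 0.79, "relevance": 0.55, "rigour": 0.85},
    "public": {"captivation": 0.84, "relevance": 0.73, "rigour": 0.86},
}

def congruent_dimensions(scores, tolerance=0.1):
    """Return dimensions where all groups' mean scores lie within `tolerance`."""
    congruent = []
    for dim in scores["self"]:
        values = [group_scores[dim] for group_scores in scores.values()]
        if max(values) - min(values) <= tolerance:
            congruent.append(dim)
    return congruent

print(congruent_dimensions(scores))  # -> ['captivation', 'rigour']
```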

Benefits for reporting

One distinction between the responses to the two trials was the ways in which respondents felt the data would improve their reporting to funders. The Australian organisations placed higher value on this aspect: 14 organisations volunteered this as a benefit of the tool. In Manchester, organisations expressed the hope that Culture Counts might help alleviate existing reporting requirements; however, there were also concerns that rather than substitute some of these, Culture Counts might add to the “huge amounts” of data already given to ACE (as the main funder) on a routine basis (Knell et al., Citation2015).

Interestingly, the objectification of artistic quality through quantitative scoring was not an issue for participants; nor were organisations concerned that these aggregate scores could effectively influence funding decisions. There was little probing of the statistical methods, and organisations seemed comfortable with talking about how experiences might be “rated” quantitatively, although it was acknowledged that this may prove a particular issue for newer, riskier work, which had not yet had the time to develop popularity or to be endorsed through more conventional (discursive) means of critical review. This comfort perhaps reflects the fact that organisations were reassured that the metrics could only ever represent one of many evaluation tools. Also, in the UK, the close involvement of organisations in the development of the metrics assured them of sector leadership and ownership, of being in the driving seat, and there was a sense of frustration that a co-production approach involving funders had actively slowed down the process.

The limitations of Culture Counts

Some organisations identified limitations. These included concerns that the survey restricted the kinds of responses audience members might give, that the survey questions were biased towards positive responses and that the idiosyncrasies of various aspects of their events (e.g. outdoor, one-off events) limited the effectiveness of the method. In Victoria, an Indigenous festival organiser expressed concern that volunteer surveyors were “typically older and predominantly white” whilst the audience for the production was predominantly young and from culturally diverse or Indigenous communities, which may have affected the responses to the survey. This raises questions about the kinds of social and cultural capital different stakeholders bring to their interactions with the platform and how these affect the data that is collected.

There were concerns that the survey tool limited interaction with audiences. For example, a Manchester trial organisation that live streams theatre and other performances into cinemas used their Culture Counts trial to find out more about cinema audiences and augment existing data. However, in terms of providing an assessment of the artistic quality of their work, they pointed to other methods that help them hear from the public and their peers:

We also get a whole load of social media feedback. Every production has a hashtag, every production has a side blog so [this happens] after every show and we have about 115,000 twitter users. (Manchester trial respondent)

As proposed by the literature review above, other methods to understand public experience and valuation of the arts, such as audience exchanges, conversations and other qualitative methods, engender public value by giving audiences an opportunity to interact with each other and with arts organisations. Culture Counts does include some qualitative data collection: in the Manchester trial an open text box allowed participants to add keywords to describe their experience which were aggregated and visualised as a word cloud in the evaluation reports. However, these are limited means, designed to make survey inputting quick and efficient, when compared to more resource-intensive but dialogic methods.
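
As a rough indication of how such open-text keywords might be aggregated before being visualised as a word cloud, the sketch below counts keyword frequencies from a handful of invented responses. It is an assumption-laden illustration, not the platform’s own processing.

```python
# Illustrative only: aggregate open-text keywords from post-event responses
# into the frequency counts a word-cloud visualisation would draw on.
# The responses are invented; real survey text would need more careful cleaning.
from collections import Counter

responses = [
    "moving, thoughtful, ambitious",
    "Ambitious; visually striking",
    "thoughtful and moving",
]

def keyword_frequencies(texts):
    """Split free-text answers into lower-cased keywords and count them."""
    counts = Counter()
    for text in texts:
        normalised = text.lower().replace(";", ",").replace(" and ", ",")
        counts.update(word.strip() for word in normalised.split(",") if word.strip())
    return counts

print(keyword_frequencies(responses).most_common())
# e.g. [('moving', 2), ('thoughtful', 2), ('ambitious', 2), ('visually striking', 1)]
```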

Administering the Culture Counts system places an additional burden on most arts organisations, which tend to be characterised by lean staffing. The resource implications of recruiting volunteers for data collection, and of their training and management, were reported as a significant issue, especially for small organisations. The task of data analysis was often considered beyond the current capacity of many organisations. In both Manchester and Victoria, organisations expressed frustration at not being able to use the survey data to full effect. In Victoria, arts organisations have a culture of high staff turnover, and organisations frequently reported the loss of staff trained in the delivery and analysis of Culture Counts as a factor that prohibited its integration into planning.

Discussion

To return to the earlier discussion of quality and performance assessment within a broader framework of public value, we now consider the findings of these two studies in relation to two main questions. Firstly, what kinds of accountability relationships are upheld by Culture Counts? Secondly, to what extent can Culture Counts provide a forum through which divergent stakeholders (organisations, peers, audiences and the public) can together contribute to the public value of the arts?

A key innovation of the platform is its combination of the valuation of different stakeholders. As one Manchester interviewee explained, this is a basic principle for artistic programming that sustains popular appeal:

I’ll give you an example, the rubric in arts is that if it’s a poor house, it’s crap marketing, if it’s a full house, it’s a wonderful artistic programme … this interaction between marketing and artistic value is kind of where the heart of the game is at … this is the beauty of the [Culture Counts] system … if yourself and peer [responses] are markedly different from the public assessment then I think you’ve got a problem because you don’t understand your market. (Manchester trial respondent)

This account describes one of the three accountability relationships for arts organisations: between the professional organisations as producers/exhibitors and the public as consumers, as the market, described by Boorsma and Chiaravalloti (Citation2010) as important to performance management in the arts.

However, recent critical approaches to audience development have emphasised the privileging of institutional perspectives within this relationship (Lindelof, Citation2015) and suggest that greater means for participation in artistic decision-making are required for both public value and cultural democracy (Holden, Citation2006; Jancovich, Citation2015). Arguably, through its standard post-event survey methodology and its focus on efficient and quantifiable methods, Culture Counts does not promote these means. Rather, it provides a platform that manages the relationship between publics and organisations and draws the publics into the same value frameworks as the other stakeholders. In other words, it articulates artistic value in the right language for the organisations and funders. Through a method which is familiar in audience research but updated through its question format and digital slide-bar, the response options to the attitudinal questions are two-fold – high or low, in agreement or disagreement – rather than formative, nuanced or creative.

This translation service is essentially the same function which Culture Counts provides for the accountability relationship between the public and funders/policy makers. It offers an endorsed mechanism for quality assurance that potentially resolves the ongoing quest for quantifiable measures to assess intrinsic experiences (Carnwath & Brown, Citation2014). Of course, assessing and rating these experiences is not the same as understanding the qualitative meanings people derive from their encounters with artistic experiences; it particularly neglects the interests of potential audiences rather than existing ones. It also undermines the value of experiences and practices defined through everyday participation in cultural distinction and taste-making, overriding the public’s understanding with institutional stories (Moore, Citation2015) and privileging the institutional perspective (Lindelof, Citation2015). Through a value-driven framework for assessment (Gilmore et al., Citationin press), it runs the risk of reproducing art forms that funding already prioritises (Miles & Gibson, Citation2016). Whilst it may provide the means for public accountability, it does not automatically follow that Culture Counts creates opportunities for public value.

The adoption of the Culture Counts tool in the UK and Australia was facilitated by longer term relationships between the funding agencies and the influential organisations who lobbied for and led the trials. Although these relationships have attracted critical comment (Selwood, Citation2015) and accusations of cronyism (Hill, Citation2016), public funding for an innovation which might make more efficient use of non-profit sector budgets for evaluation to create further public value (claims made during the progress of the trials) is not so unusual. The tool sits within a wider set of public investments aimed at supporting organisations in performance management and assessment. This messy infrastructure includes an array of ever-shifting data collection activities produced by a conglomeration of audience development and arts marketing agencies, consultants and independent evaluators and researchers.

Furthermore, the narrative of Culture Counts has a near-perfect homology with the sector’s developing interest in big data, data culture and the power of digital technologies to understand consumer behaviour and markets (Gilmore et al., Citationin press; Lilley & Moore, Citation2015). As Moore argues, big data have an aura of specialness, potentially widening the gap between the public and experts, “with conventions and access protocols which will be understood by, and accessible to only a small group of specialist data gurus” (Citation2016, p. 108), which, when linked with technology, carries a further, related objectivity.

Conclusion

Research on the two trials presented very similar findings. In both Manchester and Victoria, participating organisations experienced Culture Counts as an efficient mode of data collection in a standardised form that provided opportunities for networking with stakeholders and skills development in evaluation practices. It reinforced their existing knowledge of their audiences, and produced data in a form that provides evidence of impact for funders. The benefits of bringing together data across public, peer and self are borne out by our research respondents: Culture Counts does provide a simple and engaging way of triangulating data to support organisational understanding of the differences between the expectations and experiences of different stakeholder groups at an arts event. Importantly, the use and application of data brought together in this way has the potential to enhance organisational research, although concerns remain where skills and capability are scarce.

Culture Counts was perceived as providing a trusted set of measures to sample opinions from different stakeholder groups, and generally regarded as a well-designed platform to bring these data together and help decision-making. However, it is limited in its capacity to build public understanding of the qualitative meanings people derive from their encounters with art; and in its capacity to incorporate the interests of potential audiences rather than existing ones. Dependent upon a standard post-event survey methodology with a focus on efficient and quantifiable methods, Culture Counts does not promote the means for audiences to participate in artistic decision-making. Rather there is a political dimension, as unearthed in our comparative study, to the embedding of Culture Counts into the assessment regimes of arts organisations. It provides a platform which manages the relationship between publics and organisations and brings them into contact with the same value frameworks and assessments as the other stakeholders in value creation: it articulates artistic value in the language of organisations and funders. As such it works to reinforce art forms which are already prioritised by funding as an ostensibly value-driven, rather than data-driven, tool for decision-making.

Notes

1 The project was jointly funded by the Arts and Humanities Research Council, NESTA and ACE. Digital R&D projects bring together arts, technology and research partners for short projects which explore how digital technologies can be used for business development and audience engagement in the arts. Quality Metrics (later retitled “Culture Metrics”) was the largest in its round. For more details see www.culturemetricsresearch.com.

References