ANZCA 2022 - "Communicating through chaos: Emerging Research"

Faith in Australian numbers: new pressures on public data communication in Australia as a result of the COVID-19 pandemic

Pages 200-212 | Received 30 Apr 2024, Accepted 02 May 2024, Published online: 06 Jun 2024

ABSTRACT

Before the pandemic, Australia was already increasingly reliant on statistics for political debate. This was reflected not only in high levels of trust in statistical institutions, but in the privileged position granted to ‘data-driven decisions’ by politicians and the public. Drawing on a series of interviews across the public service, academia, media and politics, this article describes the exacerbated conditions of professionals working with public data in Australia during the pandemic, detailing both actual and perceived aspects of new pressures. Public health communication that emphasised data-driven decisions led to public demand for data access, increasing the work of scientists, public servants, researchers, and journalists to refine and communicate data releases and commentary more frequently – which in turn increased demand for more data from all. The acute pandemic pressure also revealed pre-existing issues with data management and infrastructure, and caused previously siloed professionals to stretch across multiple roles. Such acceleration leaves less time and expertise available for interpretation or careful management. Alternatively, failure to accelerate can result in real or perceived gaps in knowledge that can discourage trust in state statistics. These reflections are critical, as laborious stopgap solutions – intended to be temporary in a crisis – seem to have settled in to stay.

‘The only numbers I have total faith in are the Australian numbers’. – Brendan Murphy, then-Chief Medical Officer for Australia, April 2020

Introduction

Australia’s already high reliance on statistics and numerical data for policy-making skyrocketed during the COVID-19 pandemic response. Prior to the pandemic, Australia was one of the leading nations in evidence-based policy (Banks, 2009), a model of policy-making which in practice privileges statistical and other quantitative evidence over other forms (O’Dwyer, 2004; Porter, 2012, p. 18). But in the unknown and uncertain policy landscape of the pandemic, there was both increased demand from decision-makers for more up-to-date evidence, and increased demand on decision-makers from the public to commit to decisions based on evidence over ideology (Vilkins, Raman, & Grant, 2020). Proof of this commitment was sought through demands for transparency: for statistical modelling and other processes to be released publicly and consistently. This was a rising tension in the early days of the Australian Government pandemic response, as then Prime Minister Scott Morrison continued to reference the modelling that informed his decision-making in press statements (Taylor, 2020), but had not released it for public validation. Demand for this modelling to be made publicly accessible – both to prove that decisions were data-driven, and to allow independent checks – gained momentum across both citizenry and professional associations. The Australian Academy of Science released an official statement which called upon the Australian Government to ‘make public the scientific evidence that is informing its thinking’ (Shine, 2020), and encouraged greater reliance on this scientific evidence in the Government’s decision-making.

Australia’s COVID Zero management approach was prominent in global coverage for its success (Haseltine, 2021), made possible by the nation’s capacity and skill in gathering and utilising statistical records. The epigraph above, spoken by Australia’s Chief Medical Officer in April 2020, emphasises the reliance of Australia’s pandemic management response on quantitative data. The quote was spoken in response to suspicions about other nations’ COVID-19 case records and how such distrust affected Australia’s decisions to impose travel bans. The Chief Medical Officer emphasised that, under such uncertainty, the only figures he could rely on were the nation’s own. This faith in numbers therefore refers dually to the accuracy of Australia’s COVID-19 statistics, and to the successful management they reflect. Underlying this is a pre-existing high public trust in the nation’s statistical operations. The Australian Bureau of Statistics has previously released reports on public trust in both the Bureau and its products, increasing from an already high 81% in 2015 to 87% in 2020. Beyond this explicit measure, Australians’ trust in statistics is also reflected in their high response rates to statistical operations such as the census in comparison to other countries (ENGINE, 2020).

In the pandemic, Australia’s high pre-existing trust combined with a sudden increased need for certain and authoritative data, as well as the seeming success of this data’s application in policy. The success of numbers formed a feedback loop with their demand, increasing the privileging of statistics in policy-making and public debate, and incentivising the production and discussion of more numbers. This is common in the use of statistical evidence. As Gorur commented in the context of global education statistics, ‘not only are numbers trusted more but more numbers are trusted more’ (Gorur, 2016, p. 665), in the sense both of complex, large-scale operations and of how producing statistics becomes incentivised over time. She also notes the importance of new statistical collections going from unimaginable to routine, and consequently expected, desired and demanded by both the media and public (Gorur, 2018). This was clearly seen in the backlash to announcements by state premiers that the routine daily press conference announcements of COVID-19 case statistics would drop in frequency – ‘widely condemned’ (Doherty, 2021) – and again, much later on, when the actual health reporting changed (Australian Associated Press, 2022).

Coming out of the pandemic, ongoing government operations now rely on both new and more frequent statistical updates. The new and immense projects associated with recording COVID-19 cases, contact-tracing, and other calculations became routine over the course of the pandemic, dramatically updating the speed and scale of existing regulations for notifiable infectious diseases. Economic measures, as well as public health ones, adapted and expanded. The consumer price index, published by the Australian Bureau of Statistics as a quarterly release since the 1960s, was joined by a new interim monthly indicator from 2022 onwards. This escalated production of the index relied on technological advances over the preceding decades, but owes much to the rise in both public and political demands for data in the pandemic. Indeed, the Chief Statistician called the pandemic ‘an enormous opportunity’ for the Australian Bureau of Statistics to prove itself (Walker, 2023).

Some data voids were filled not by state operations but by others: journalists, consultants, and other independent secondary analysts. Data journalism projects from the Guardian and the ABC offered clearer visualisations and updates of health and economic data for their readers. Independent projects such as covidlive.com.au explicitly describe their rationale as filling the gap in official data publication, providing updates in-between the slower official releases (Macali, 2020). Another factor contributing to pressure on statistical production was concern about incorrect information being circulated and needing correction. There was also a global pattern of ‘armchair epidemiologists’ (Humpherson, 2020) – people with insufficient epidemiological expertise, and varied statistical expertise, contributing secondary analysis, estimates and predictions to the public sphere. Connected fears of misinformation, disinformation, or simple lack of information motivated both state and non-state actors to increase statistical production and circulation, acting as yet another demand driver.

While the increased pressure to produce more public data may be evident at a distance, its consequential effects are less clear. The pressure itself is evident in public and professional responses in the media demanding more numbers, in unhappiness when numbers were reduced, and of course in the increased statistical production itself. The presumption behind these is either that increasing production would have no negative consequence for the policy system itself, that any consequence would be outweighed by the benefit of having more data available more frequently, or that the consequences would be somehow separable from the data production itself, and thus would not compromise the quality of data. Some of the detrimental trade-offs were prominent in mainstream discourses at the time, particularly related to the more general sentiment that ‘trading liberty for safety’ was a foolish overreach by Australia (see e.g. Friedersdorf, 2021). While this consequence is a critical facet of increased data production, it does not capture the reflexive consequences of mass statistical production and communication.

It is therefore important to note that this pressure is not solely on production, but also on communication. As seen in the backlash to the reduction of press conferences, and to batched case reporting moving from daily to weekly, expectations were tied as much to the communication of and public access to data as they were to its production and application to policy. Challenges in data access and communication were noted by journalists and other commentators at the time, pointing towards increased errors and missing data due to infrastructure problems, and towards related issues arising from limited resourcing for fact-checking or validating work. However, at a distance, these are only hypotheses.

At a societal level, research from the sociology of quantification describes the reactive and reflexive effects of increased use of statistics in public debate; however, less is known about how these effects are constructed at the mundane professional level. Numbers, particularly consistently published rankings or rates over time, tend to dominate discourses and demand attention, even when contested by publics (Espeland & Sauder, 2007), often growing to be demanded by the same groups as their availability becomes routine (Gorur, 2018). Numbers initially used to describe are easily translated into tools for surveillance and control, even involuntarily, as populations react to published measures (Espeland & Stevens, 2008, p. 414). Alongside these behavioural effects, research has shown the effects of reflexivity in the exacerbated constrictions of ongoing statistical production. Of particular concern is data inertia (Merry, 2016), the tendency of production decisions on methods, scope, and categories to persist. This is reflected in reviews of policy practice, with one review from 2004 describing how new measurements are decided by relevant ministers and by what has come before (O’Dwyer, 2004). This is an issue both of practical resourcing, as similar collections can be more cheaply integrated and communicated, and of theoretical constraint. Increasingly, the effect of data inertia is that what counts is only what has already been counted (Merry & Wood, 2015).

Study rationale and methodology

The reflexive and constraining effects of increased statistical production in public policy may be commonly known, but they are under-described in terms of how the consequences of stretched resources constrain the work of professionals both directly and indirectly over time. The Australian pandemic response is a highly relevant case, where demand for statistics in public policy from politicians, professionals and the public skyrocketed, and production increased to match. To investigate these effects, I spoke directly to professionals working with public data in Australia in 2020, asking them to reflect on their day-to-day processes, as well as the ecosystem they contributed to more generally. This approach was also motivated by a recent call from the field of critical data science to foreground the expertise of professionals in critique: Barocas and Boyd (2017) comment that academics have often flattened the subjects they study, declaring professionals to be unaware of the power they wield and the damage for which they could be held responsible. They call for scholarship which acknowledges and engages with the expertise of data professionals, and the extensive work they already conduct in managing data responsibly, against the ‘frustration’ (para 3) professionals feel about this work being ignored. This investigation therefore aimed to foreground the expertise of these professionals, and their own expert sense of how their work had been constrained – or perhaps improved – by increased demands during the pandemic response. The study methodology chosen was semi-structured elite interviews, common in media and communications policy research in recognition of the often small number of relevant key actors (Herzog & Ali, 2015, p. 38). The set questions were open-ended to encourage interviewees to reflect, explain and justify their responses in detail (Van Audenhove & Donders, 2019, p. 188).

The data for this study came from a larger corpus of interviews with professionals working with data relevant to Australian public debate at a national level in 2020. This scope was considered broadly across multiple contributing sectors. While the COVID-19 response was not explicit in the interview questions, many participants spoke at length on the topic voluntarily. These responses have been grouped and analysed here in the context of the demands for increased statistical production occurring acutely within the pandemic response, alongside general reflection on preceding eras. Recruitment was via snowball sampling, primarily amongst professional association groups. Initial seeding was done via public calls on Twitter, through the Statistical Society of Australia, and directly to known data journalists and public commentators in Australia. At the conclusion of each interview, participants were asked to suggest any other professionals to be interviewed. Interviews continued until saturation of both interview content and suggestions of potential participants. In total, 49 interviews were conducted. Multiple participants held dual roles across different sectors; however, for the purposes of study analysis, primary designations were assigned: 10 participants working in academia, 9 in the media, 9 in politics, 11 in the public service, and 10 working as other consultants, secondary analysts or commentators. It is important to note that while an awareness of participants’ different sectors was kept during the coding and analysis of data, the theoretical rationale of the larger study focused on the acts of data manipulation and translation considered together across sectors, rather than in separate comparison. This approach was validated by the number of participants with dual positions, regular consulting or past experience across multiple sectors, and the generally porous borders between each.

Data collection took place throughout 2020, online via the video conferencing software Zoom, under approval of the Australian National University Human Ethics Protocol 2019/266. Informed written consent was obtained from each participant prior to each interview, with the level of anonymity desired by the participant discussed throughout the interview. Interview transcripts were coded and analysed in the qualitative coding software MAXQDA, following the process of thematic analysis outlined by Braun and Clarke (2006) and detailed for media policy research by Herzog, Handke, and Hitters (2019). This analytical process involved first refining themes across the entire dataset, then extracting and reviewing themes explicitly and implicitly relevant to the pressures of the pandemic response, combined with desk research to contextualise reflections both in the rapidly-shifting environment of the response and in the broader public policy environment of Australia in which the participants work. Final reports of thematic analysis include selected data extracts that best represent particular themes, placed in an analytic narrative (Herzog et al., 2019, p. 396); I present this narrative below, with data extracts as interview quotes, alongside further introductory context gathered across the corpus of interviews more broadly.

Pandemic pressures

The professional reflections in the interviews detail both actual and perceived aspects of the new pressures that flowed through the system in feedback loops. Authoritative public health communication that cited decisions based on data led to public demand for data access, increasing the workload of medical professionals, public servants, researchers, journalists and others, who worked overtime to refine and communicate data releases and commentary more frequently – which in turn only increased demand for more data on all sides. What is most clear is that while demands on statistical production and communication most definitely increased as a result of the pandemic, these demands are multifaceted. While still noting a general sense of pressure from the public, participants spoke not only of increased functional demands from other sectors, but also of their own professional demands of others. Most notably, one secondary analyst reflected on a wish for ‘more reliable, more timely’ statistical releases, specifically lamenting how they would ‘love to see a monthly CPI for instance, but that ain’t gonna happen anytime soon’ – a measure the Australian Bureau of Statistics (ABS) would later begin publishing from September 2022. In announcing the new release, the Reserve Bank of Australia termed it ‘a welcome step towards a timelier read on inflation in Australia’ (2022).

Much of the pressure was concentrated on the ABS, partially because of its existing central role in statistical evidence in Australian public policy, but also because of its quick response in increasing production at the start of the pandemic response. A secondary analyst highlighted the ABS as having ‘really stepped up to the plate in this COVID era’, producing data sets that were ‘really useful’, such as automated weekly payroll figures, ‘things like that, that they’ve never done before’. The participant emphasised the speed of these new productions relative to before the pandemic: ‘they’ve suddenly started producing them almost within days of the impacts being felt’. In contrast, the analyst pointed to many other Australian government agencies that they perceived as falling behind demand. One such example was the previously regular reporting on welfare recipient numbers:

We would normally have January, February, March, and we probably would be waiting on April’s coming out soon. And yet we’ve got nothing from December onwards, which I think’s really poor given that it’s such a vital dataset at the moment. How many people actually are claiming Job Seeker? Nobody has a clue.

Another participant, working in academic research, reflected more generally that ‘Government’s really bad at releasing administrative data in a transparent and timely way […] particularly about things that they’re a bit embarrassed about’, which they specified applied not only to routine data releases but also to one-off requests. These reflections speak to the demand created by data voids from authoritative sources, and the dynamics behind the rise of alternative and citizen-led data projects during the pandemic. Indeed, other institutions certainly felt the demand, or otherwise pre-empted it. Descriptions of the annual scheduling of survey design and question inclusion for a national survey instrument included how the team ‘had to rapidly adapt’ in early 2020, knowing ‘people are going to expect questions about COVID’, and being ‘swamped with suggestions that made [their] life harder’.

The increased usage and production were evident across varying topics and levels of public policy. Media interest was seen across various subject domains, with one participant reflecting on their economic modelling suddenly receiving substantial publicity and new users. At the local policy level, one participant reflected specifically on city council data demands for the pandemic response, working to bring together ‘a whole range of disparate data sets to tell a local [city] story, both in response and in recovery to COVID-19’, specifically noting that retrieving this data from state-level statistical releases was not practical. The data sets drawn on for this particular city-level response included transaction data from local banks, customer inquiries with commercial property agencies, city visitor numbers, telehealth data, and others. The participant noted that this new need presented a ‘priceless opportunity to encourage the data to be centralised into one platform to tell local stories more effectively, and also to enable local research as well’. This was a common theme across the professional reflections: that the novel and sudden demands of the pandemic provided a – relatively – convenient excuse to push for new schedules and structures for data management.

Another aspect of increased workloads was in communicating uncertainty to the public. One participant described the gap in understanding, where ‘the public thinks you have a supercomputer that tells you if Fred Bloggs is going to develop COVID-19 next Tuesday, and so they don’t understand the nature of models in a theoretical way’. They also noted difficulties in communicating epidemiological specifics, such as ‘contributory cause’ in recording deaths, which was a concern for misinformation as people misinterpreted what were standard recording practices. Participants reflected on this extra explanation or ‘cautioning’ work around COVID-19 reporting from different perspectives: some with a sense of obligation and responsibility, others with irritation at the added burden.

Participants also detail practical challenges in the Australian data pipeline during the early stages of the pandemic. Many commented that such challenges – particularly outdated standards and formatting – were already in place under more mundane circumstances, though exacerbated by the changes in pandemic reporting. As stated above, there were many generalised complaints about the timeliness, transparency, and general clarity of government data releases from multiple participants across academia, the media and secondary analysis. These issues were seen to worsen during the pandemic, when many agencies paused regular production while others increased output, leading to stopgap, informal solutions. One participant described frustration at the ways they had to source data they felt should have been publicly accessible:

To get that data, I have to like, find a friend who works for a senator and get them to ask a question in estimates to get that data out of the department. That’s crazy. Like, that’s no way to run a like, moderately efficient country.

Furthermore, these issues manifested to different extents across Australia, as data management operated differently in each state. This is clear not only in the reflections in this study’s interview data, but also in public commentary on social media (see Vilkins et al., 2020) by many data journalists working in mainstream outlets as well as contributing in volunteer capacities to fill data voids. At one level, there was ongoing discussion regarding the states’ differences in case reporting terminology, with one participant describing how, for example, ‘Queensland is counting deaths of Queenslanders regardless of where they occur. New South Wales is counting anyone who dies from Coronavirus in the state. So, obviously, those two numbers can’t go together’. Under these two rules, a Queensland resident who died in New South Wales would be counted by both states, while a New South Wales resident who died in Queensland would be counted by neither. But critique also touched on how the underlying data infrastructure, introduced too rapidly, was left operating far below best practice standards:

The best information is still being put out in a single PDF to a mailing list of Canberra journalists from the Department of Prime Minister and Cabinet. And this is a ridiculous situation actually … we were just doing it by hand. But then I made a PDF scraper so I can actually look at local case transmission over time and all this other stuff, but it’s insane that we can’t just go to a website and download that.
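To illustrate the kind of stopgap tooling this participant describes, a minimal sketch of such a PDF scraper in Python might look as follows. This is a hypothetical reconstruction using the open-source pdfplumber library; the file name, field labels and extraction pattern are assumptions for illustration, as the participant did not specify the report’s actual format:

```python
# Hypothetical sketch: pulling case figures out of a daily situation-report
# PDF because no machine-readable release exists. The file name, labels and
# regex below are illustrative assumptions, not the real report format.
import re
import pdfplumber

CASE_LINE = re.compile(r"(Locally acquired|Overseas acquired)\s+(\d+)")

def extract_case_counts(pdf_path: str) -> dict[str, int]:
    """Collect labelled case counts from every page of a report PDF."""
    counts: dict[str, int] = {}
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            text = page.extract_text() or ""  # pages may yield no text
            for label, value in CASE_LINE.findall(text):
                counts[label] = counts.get(label, 0) + int(value)
    return counts

print(extract_case_counts("daily_situation_report.pdf"))
```

Even this small sketch makes the participant’s complaint concrete: every change to the PDF’s layout breaks the pattern, and each analyst must maintain their own scraper, where a single machine-readable release would serve everyone.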

In parallel, those on the production side reflected on the requirement to stretch into assisting with communication efforts due to increased activity. One participant explained how their role had previously kept them far from the media or institutional communications, ‘very internally focused’, preferring their only audience to be peers regarding technical work. However, they stated that more recently they had been ‘a little closer’ to communications work, especially in drafting media releases, because of the speed of recent work: ‘Some of the details are fiddly. So we want to get some information out there about the projects we were working on so we don’t surprise people after the fact’. These vignettes describe multiple roles across sectors in Australia where professionals were either tasked with, or responded themselves to, the need to cross usual expertise lines and contribute new efforts to produce and communicate statistics in response to the crisis.

Preceding patterns

The interviews detail not only the pressure directly resulting from the pandemic, but also the pre-existing trends of constrained resources and demands that have been affecting data in Australian public policy for decades. Many of these are recognisably the same root drivers and dynamics that were only exacerbated during the pandemic, such as increased capacity for – and therefore demand for – statistical power in data-heavy policy areas, paralleled by the relatively poor statistical literacy of the non-experts consuming this information. Other long-term trends were described as slow erosions or constraints of best practice over the course of careers, such as focus shifting from report-length outputs to social media highlights, and a related emphasis on newsworthiness.

Multifaceted demands across the ecosystem – as detailed above – were present more generally. Secondary analysts described wishlists of more frequent and accessible releases of official data, particularly for economic data and regional numbers. In other areas such as public health, participants explained how the sector was always ‘really reliant on statistics and data […] particularly in the sort of sphere of regulations and policies’, in describing why intervention is needed and where. ‘Unless we have that data, it’s really hard to actually make that argument and make that story’. The same participant also noted, in response to COVID-19, that there was an ‘interesting issue’ with highly prevalent, chronic diseases with high death tolls having received nowhere near the attention – either media coverage or research and intervention funding – of COVID-19, even though the data ‘are really clear about chronic disease being there and being costly, deadly’.

Indeed, the rising demand for newsworthiness, increasingly communicated via data, was discussed by multiple participants. One participant described the traditional journalistic newsworthiness criteria in the context of releases of unemployment data as: i) immediacy, explaining how each new release of unemployment figures has a renewed sense of newsworthiness; ii) impact: ‘Everybody has a relationship with work. So its impact is very broad’; and iii) novelty: ‘If it says something new, which we didn’t expect’. The participant further emphasised the use of numbers to create controversy, particularly around governments, as being ‘particularly newsworthy’, ‘because the media operates on frame of crisis conflict and change’. Many participants echoed this sentiment, reflecting either matter-of-factly on creating or seeking newsworthy numbers to court attention, or lamenting how important and consistent reporting is often overlooked or dismissed in favour of new and controversial numbers.

The increased reliance on newsworthiness in reporting statistics, and in their use in politics, is tied to the recognised impact of social media on communication styles. One participant reflected specifically on rapidly learning the restrictions on communicating error in their work in government, saying, ‘[My boss] said, first of all, no one in government wants anything called “error” on their work like that’, adding that they rarely get to include error or uncertainty around published measurements despite trying, which they felt was ‘kind of unnerving’. Another participant, who had worked routinely between academia and the public service in their career, described a decade of increasing restraint placed on the output of commissioned reporting:

A decade ago, major reports would have an abstract, pretty basic [for academia], then we got a directive from the [then-Minister], that that was too long for him to read. So down to dot points, and we were allowed, I think, five or six dot points, no character limit at that stage. And then the following minister found that rather onerous as well. So then we had two-sentence summaries, that had to represent everything in the report. And of course, there’s no way you can do that, it’s just not going to happen.

This trend extends into a contemporary emphasis on social media reporting, with the participant continuing: ‘These days, we get asked for four or five points that are tweetable. So we have character limits on what we’re able to summarise an entire report into’, highlighted as particularly problematic given the nuanced and sensitive nature of the reports. One major tension arising from such a reduced footprint for summarising research was the choice of ‘do we report the good stuff, or do we report the bad stuff?’, and fears of over-simplifying complex issues. This is a longer-term account of the same effect seen among the public service statisticians stretched across expertise lines in communication when time was of the essence: here, scientists felt they were contributing outside of their expertise, forced to make decisions about policy relevance with limited resources.

One of the major causes identified by participants was political intervention, either directly by politicians or pre-emptively by public servants and institutions. Delayed government sign-off on statistical releases was one noted facet of this, with one participant describing a certain data set that was previously on ‘a nice regular timeline’, which they then had to chase up over several months. Finally:

They said, we’ve done it, we’ve finished it, but it’s with the Minister’s department, and he’s decided he doesn’t want to release it just yet, because there’s something else going on, or they want to make an announcement about something. So they’re kind of hijacking data to suit political requirements or to tie in with whatever their particular version of political aims is. And I think that’s disappointing as well, when it’s supposedly publicly available, publicly funded data.

Many participants also reflected on limited capacity and skill on the side of government and media data management and reporting. One participant summarised that though reporting on statistics was improving in their perspective, ‘Still, you see a lot of really numerically illiterate statistics being published’, referencing, for example, pie charts totalling 125% in reporting of public opinion polling – which may be motivated more by political framing than statistical illiteracy.

Consequences of increased pressure

Also evident in the interviews are the consequences of the acute and long-term pressures on public data. Such acceleration in production and communication leaves less time and expertise available for interpretation, as data releases are left to ‘speak for themselves’ and assumed to point objectively to particular policies. Alternatively, failure to accelerate can result in real or perceived gaps in knowledge, which can discourage trust in state data production.

One participant described the sudden ‘rapid response’ needed to meet the demand for insights into the early pandemic response, noting the questions for a major survey instrument could not be tested and fixed as they would have been under usual circumstances: ‘No testing, and that’s going in this year’. This account points to the many consequential effects not only of the reduced time and resources of the pandemic pressure, but of decreased funding and resourcing at any time, as complex survey instruments and statistical operations may be increasingly left to operate without standard levels of care and quality control.

In parallel to production, the interpretation of data may also suffer, as professionals and non-professionals alike struggle to find the time and expertise to place data in its proper context. One manifestation of this is professionals being required to work outside their usual role, making decisions they feel unequipped to make regarding communication or framing of an issue; another is audiences ‘switching off’ in response to bombardment with statistics, particularly similar ones over and over. A related element is the raised level of suspicion of illegitimate information in such a rapid response environment. One participant working in public health explained:

COVID has actually made me become really suspicious and really critical of data that’s out there and being able to dig down beneath the headline, just because I think we’re so swamped by all these different numbers, about coronavirus, that it’s quite overwhelming.

The participant also noted that where previously institutional prestige and trust worked as shorthand for the reliability of data, some notable organisations and mastheads gave them pause due to incidents with accuracy during the pandemic, likely attributable to a lack of appropriate oversight or checks on new information. However, ‘editorial issues’ were also highlighted, even from major peer-reviewed journals.

The fixated drive to produce more and more of certain statistical outputs can – perhaps ironically – reduce the ability to produce other outputs, creating unbalanced coverage in data, and therefore potentially an imbalance in policy attention. Many of the participants working as secondary analysts or consultants emphasised this point, with one participant noting that many datasets their team relied on frequently were not as up-to-date as equivalent data for capital cities, necessitating predictive calculations which contained more potential for error. Other analysts described their work in stitching together ‘basically any data source [they] can find’ for comparative work of sufficient insight and statistical power for policy relevance, particularly for matters affecting regional, rural and remote areas, or particular subject matters outside of major policy and funding focus. One particularly complex focus was data on First Nations people in Australia, as efforts on Closing the Gap goals hold such policy attention that they necessitate committed data gathering, but leave notable gaps in data on surrounding issues. Participants described sequences of events across multiple topics, reflective of the data inertia effect, where highlighted areas of action resulted in funnels of diverse data into standardised statistical reports, with qualitative, thick description falling out of consideration, and thus weakening the potential for alternative perspectives to change the course of policies.

Australian numbers

A final finding of note is the trust, pride and support across all participants for the combined work of Australia’s public numbers, primarily – but not exclusively – centred around the Australian Bureau of Statistics. Though many participants found strong words to criticise the frequency and accessibility of releases, they often framed this not as a primary issue but as a consequence of needless political interference. Almost all participants, when asked specifically about the landscape of Australian public policy data, took time to note the quality and reliability of Australian institutions, and the trusted professional and personal networks that underlie them. Many participants explicitly tied this to funding levels, noting recent funding cuts and emphasising the requirement of committed resources for reliable, high-quality data:

We are really lucky in terms of having really strong public government agencies and departments who do a huge amount of work in terms of collecting and collating statistics and data. […] And I think that having really well-funded government departments and training programs is a really good way to ensure that we’re collecting really robust high-quality statistics and data that we can then rely on in emergencies like COVID.

Conclusions

The reflections detailed here showcase the multifaceted and interconnected pressures on public data during the pandemic, not only in production and communication, but in-between and surrounding both. Common themes from 49 interviews with professionals across the Australian public service, academia, media, politics and other sectors all contribute to a picture of a public data ecosystem that is capable of greater speed, scale and output, but with what feels like limited resourcing to meet increased demand. Taking stock of the observations of professionals themselves is critical after the initial pandemic response, as laborious stopgap solutions – intended to be temporary in a crisis – seem to have settled in to stay. The constraint of time and resources due to increased pressure may be the common story across the pandemic, but these descriptions illustrate how preceding years of constraints on publishing outputs, centring of social media logics, and erosion of public resourcing set the scene for more acute effects. The sudden, controversial and uncertain conditions of the pandemic and the public expectation of data-driven management – especially high-throughput, accessible, detailed and up-to-date data – allowed rapid responses to emphasise up-scaling and increased production output over resourcing for interpretative or ethical considerations, without much notion of walking back these new thresholds of work. This privileging of statistical evidence, in terms of both production resources and the rewards of media coverage and policy attention, served not only to further encourage more of the same, but also to diminish over time other forms of evidence, or aspects of issues which were less easily or frequently enumerated. Solutions can be found within the insights of these same professionals, who offer straightforward calls for more, and more varied, resourcing for public institutions, of which they in turn ask for more reliable and accessible data across topics, geographies and demographics that are often left out of policy conversations precisely because they lack relevant data. Near unanimously across the reflections, professionals made clear their respect and gratitude for each other, and for the common resource the country has constructed and maintained in its public statistical institutions. After over a decade of maintaining performance under rising pressures, amidst the acute new strain of the pandemic environment, they make a clear call to re-invest in and properly support the production and management of public data, so we can continue to have faith in Australian numbers.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research was funded by an Australian Government Research Training Programme scholarship as part of a Doctor of Philosophy degree at the Australian National University.

References