The societal impact puzzle: a snapshot of a changing landscape across education and research

Pages 323-340 | Received 13 Nov 2020, Accepted 16 Jun 2021, Published online: 09 Jul 2021

Abstract

The term ‘impact’ is everywhere. Organizations and individuals want to fund projects for impact, measure impact, and showcase the impact of effort, expertise, and financial investment, but clear definitions and understandings of what having an impact really means for people and institutions appear lacking or ad hoc. This paper explores ‘impact’ in the areas of education and research as they feed into government practice. For governments, the impact agenda involves operating in increasingly tight fiscal environments with mounting pressure to articulate and demonstrate return on investment. For education providers, there are increasing calls to justify and prove why investment in education is an efficient and effective endeavor. For universities, this includes a shift from a traditional publication-focused research impact culture to a wider societal impact one that demonstrates direct and indirect benefits to society. This paper conceptualizes impact as a “puzzle” with many pieces, with education and research making up key pieces that can, and need to, fit together better. In doing so, the paper identifies four problem areas to help guide thinking toward clarity about what ‘impact’ entails. To aid collective progress in this space, we detail key issues facing the education and research sectors. Based on our analysis, we arrive at a set of questions intended to help guide thinking and actions toward collectively increasing the ability to generate and demonstrate the impact of both on government practice and on society at large.

1. Introduction

What works, why and how best to achieve “real impact”[1] remains a perennial question, one that can be answered in an ever-expanding number of ways depending on the context. Never before have governments and the education sector felt so much pressure to demonstrate impact and “return on investment,” whether that be financial, expertise, time, or effort, among other inputs. There is a convergence of sustained interest around the impact of education and research for governments and universities, and the need to better understand the value this has for public policymaking, government, public service, and the communities they serve. But in practice, there often appears to be a lack of coherent or consistent understandings, coupled with sectors operating in silos and talking among themselves, across, or at, rather than with, different sectors. Rather than seeing different definitions and approaches as purely competing, we argue that more nuanced understandings of the different drivers of what “counts” as impact within each are needed. Each sector forms an important piece of an overarching “societal impact puzzle” and it is important to understand what the benefits are for society at large. We argue that more conceptual and practical work is needed to better connect and incentivize the tracking of impact across different sectors to be able to tell a richer and longer-term story about what the impact of various activities on government is, and in turn, what the impact of government on society is. Doing so requires bringing together experts and practitioners from across sectors such as universities, governments, public management fields, and civil society, among others, to garner their respective understandings, strategies, and frameworks for impact. The need for this type of collective endeavor comes at a time when there is increasing demand for connected and concerted efforts across multiple disciplines and sectors to advance thinking, develop new innovative methodologies, and pave a way to systematically begin to demonstrate various forms of “impact,” and to understand what works and why across sectors, dimensions, and time.

But how best to go about this in a coherent and concerted way across all sectors remains unclear. Internationally, we are witnessing an increasing desire and pressure to demonstrate the impact of theory-informed education and research on policy and practice (such as the rise of annual “impact reporting”; see also Zardo 2017; Gunn 2019; Grant 2019; Dennett 2019). This push is playing out in different ways. For education providers, especially those outside of the university sector, this means increasing calls to justify and prove why investment in education is an efficient and effective endeavor. For universities, this includes a shift from a traditional publication-focused impact culture to a societal impact one, and the rise of engagement and impact frameworks including the Engagement and Impact Assessment (EIA 2020) in Australia and the Research Excellence Framework (REF, n.d.) in the UK. This picks up on debates about what the value of universities is and what they are for (for both teaching and research). It also brings into focus the tensions between what governments may see the value of universities to be, compared to what the community and broader society may see. These debates and tensions have arguably contributed to the inability to clearly articulate what the value of universities is regarding both education and research, and have arguably led to a decline in government investment over time, a fixation on value tied to individuals, and declining public trust in universities. For governments, this involves operating in increasingly tight fiscal environments with mounting pressure to articulate and demonstrate return on investment for external research and learning activities.

Whilst universities have begun to take steps, primarily in regard to research impact, a focus on the impact of education and policy on government and public sector practice appears to be lacking. To help better understand a path forward, this paper begins by outlining what we mean by the “societal impact puzzle,” followed by four problem areas that highlight how we think about, and what we should do about, impact. We then turn our attention to the education and research pieces of the societal impact puzzle and arrive at ten questions to help guide thinking and actions moving forward. From the outset, we note the inter-related nature of education and research as puzzle pieces and acknowledge that our analysis is not an exhaustive exploration, nor is our purpose to offer concrete solutions; rather, we offer a set of sense-making questions as a basis for further discussion and reflection about how we can advance our collective thinking and practice.

2. The societal impact puzzle

There are a number of ways of thinking about “impact,” with different understandings within and across sectors. The following section outlines four key problem areas requiring further discussion, which relate to: a lack of consensus about what “impact” means; what the rationale for measuring “impact” is; the need for deeper methodological work for measuring impact; and what the quantum of attention should be and where responsibility should fall for tracking impact.

The first problem area relates to a lack of consensus about what we mean by “impact,” along with clear levels and degrees of reach, whether that be individual, organizational, sectoral or societal, among others. In Australia, the Australian Research Council defines “impact” as “the contribution that research makes to the economy, society and environment and culture beyond the contribution to academic research” (ARC 2017). But there are a number of terms that are often used interchangeably, such as “success,” “assessment,” “outcomes,” “evaluation,” and “effectiveness,” often with definitions omitted. As a first step, conceptual and practical work needs to be done to reach a consensus about what we mean when we talk about “impact” on government and public sector practice. For public services, what do we want to be able to see and capture? For education, is it the application of concepts and ideas demonstrating positive shifts in future outcomes, improved management performance, or evidence of innovation? Is it promotions, mobility data, improved earnings, increased capability, etc.? For research, is it a linear line of sight to uptake in policy, or new ways of thinking about issues, the ability to assess problems and suggest solutions, or a better understanding of the complexities and possible pathways forward? In the university sector, impact is often perceived through performance indicators such as quality and/or quantity of publication output, teaching evaluations, academic service, grant funding, and citation and journal metrics (H-index, A-star, etc.), but how to balance these and with what relative weighting is contested. From this brief overview, it is clear that the idea of impact has achieved traction, largely due to government policy drivers, but the challenge of how to meaningfully incorporate it into systems and organizations, especially those with contrary or competing goals, remains unclear.

The second problem area relates to the need for clear rationales about what the purpose of measuring impact is. For example, is it to justify spending or resources alone? To track and document continuous improvement and learning? To understand whether something works, or why something works, or both? Is it to monitor, ensure compliance, motivate or incentivize? Careful consideration and reflection from the outset are needed to fully understand what the purpose is and what the findings will be used for. This is crucial because it informs the “rules of the game,” which those who wish to succeed will endeavor to learn how to play, for better or for worse. Related to this is a question about what we can reasonably expect “impact” to be. For instance, is there a particular point or threshold at which something is deemed to be impactful or not, on what grounds, and who is best placed to make this judgment? Martin Brookes (2019) has argued that “impact” isn’t simply about effectiveness, but rather about a mindset and a journey of continuous improvement. In this sense, “impact reporting” should be about “improving first, not proving” and demonstrating that the best efforts have been made to achieve the most (Brookes 2019). With this in mind, what are our methods for capturing and documenting improvement and expected reach? And what is the sphere of control or influence we can reasonably expect from any single activity?

The third problem area relates to the need for deeper methodological work about how to measure impact. For example, innovative and longitudinal ways of capturing and documenting various forms of “impact” that are fit for purpose are needed, ways that go beyond individualistic practices such as static participant feedback. There are a number of complexities and tensions at play when it comes to demonstrating impact. This includes (although is not limited to) the question of how methods can be attuned to multiple dimensions and factors simultaneously. Dimensions such as scope, depth, duration, timing, context, reach, and level of analysis (individual, family, team, system-wide, organization, community, etc.) need to be considered. How can our methods for knowing account for the relationship between the depth of research or the length or quality of education activities, differing expectations and outcomes over varying time scales (short, medium, long-term), and multiplier effects? What are the time frames within which we can reasonably expect particular outcomes, and how can we account for emerging and flow-on impacts that may take time to emerge and wash through? There is also a multiplicity of factors at play, such as age, sector, experience, gender, race, sexuality, ability, culture, career path, organizational culture, motivations, values, autonomy, and ways of learning, among others. These may confound and complicate causal logic, making it difficult to establish the contribution of any particular dimension or factor to the impact story. Then there is the question of how best to balance qualitative and quantitative approaches in ways that can counterbalance explicit and implicit bias, such as gender, race, age and cultural bias (among others) in evaluations (see Fan et al. 2019). What are the benefits and risks of methods that look for tangible outcomes? An increased focus on tangible “impact” alone can arguably lead only to a limited and false sense of genuine “impact” and risk producing and re-producing only what can be counted, seen, or methodologically captured. Simultaneously, to what extent is further work needed to come to terms with, and justify, the intangibility of “impact”?

The fourth problem area relates to the need for further discussion and reflection about the appropriate quantum of attention that “impact” tracking activities should attract, and where such responsibility should fall. With increasingly tight fiscal arrangements, how much time and how many resources should be dedicated to tracking? Can it, or should it, be determined as a ratio or proportion of the overall spend or duration? And where should this responsibility fall? Should it be on the provider or the commissioning party? Or on the particular individual or organization which stands to benefit from the activity? How can this be achieved when impact tracking and measurement are not, or may not be, considered core business? Following this, how big a piece of the journey and impact story can any one party be reasonably expected to tell?

With these four problem areas in mind, we now turn to a brief overview of some of the key ways impact is understood and acted upon in the realm of education, followed by research, before offering a set of sense-making questions based on our analysis to help advance collective thinking and action across sectors.

3. The education puzzle piece

When it comes to the education piece of the societal impact puzzle, we know that organizations and individuals invest in education as a way of increasing knowledge, skills, productivity, capability, opportunity, and competitiveness. It is widely accepted that education makes a positive difference in society, both at the individual and at the population level. Minimum standards in education, such as the ability to communicate (literacy) and calculate (numeracy and judgment), promote human flourishing and good health, enabling a sense of identity and well-being. It is clear from government policy that education is a way of providing and enabling public value. In line with arguments made by Mazzucato (2018), more work is needed to better track and articulate the ways in which public investment in education results in greater public wealth and broader co-created public value. Arguably, a decline in public investment in education, coupled with a decline in public trust and negative narratives about universities, may stem from the inability of universities and their leaders to articulate a clear message about the positive impact of education and research on societal wellbeing. Doing so requires a longitudinal focus and simultaneous attention to the breadth, depth and temporal dimensions of outcomes to both track and articulate impact.

When it comes to measuring the impact of education, it is fair to say that the dominant ways of capturing and demonstrating impact in the field of public administration and management are undeveloped compared to other fields (e.g. science, medicine). Although new reports and guidelines are in place, such as those from Innovative Research Universities (IRU), the Tertiary Education Quality and Standards Agency (TEQSA), and the EIA in Australia, and the REF and Teaching Excellence Framework (TEF) in the UK (see Bhardwa 2019), there are no national or international measures that can link specific educational activities (that occurred in the past) with direct outcomes (in the present and future). Moves such as the UK’s TEF attempt to track teaching quality, learning environment, student outcomes and learning gains (Gunn 2019), but they have limits. For instance, education “outcomes” in the TEF framework are partially assessed on employment destination and graduate earnings, which is contested, exemplifying how there isn’t a simple “silver bullet” metric (see Gunn 2019). Accredited professional programs in other disciplines (such as law, librarianship, medicine, etc.) ensure regular, in-depth assessments of teaching quality and curricular content; however, these programs rely on close links to professional bodies and are limited to particular sectors.

As education levels increase, debates over education’s functionality and purpose expand. Baseline education (the definition of which remains open, but for our purposes here we broadly view it as at least the attainment of primary and secondary level education) is seen as indisputably beneficial as a human endeavor. Many, however, begin to question the value of education beyond a certain saturation point, interrogating whether: (i) its marginal benefits outweigh its costs; (ii) it contributes to societal cohesion or improvement; and (iii) it promotes the interests of certain parties over others (see Blagg and Blom 2018). For example, disputes continue regarding whether education is at the service of capital, whether it should be deemed as providing predominantly private or social returns, and the prioritization of resourcing between different fields, such as sciences versus arts and humanities (see Strauss 2017).

The complexity and tensions surrounding how best to calculate educational impact are well documented, most of which centers on university education, with less attention to other educational activities such as executive education, professional development, and training. In part, this may be because universities are better positioned to work collectively as a sector, compared with more fragmented private and third-sector education and training providers. There is also a lack of clarity about what the differences between education and training are, the degree of accreditation, and the expected value and outcomes of each (see Brungardt 1996). Generally speaking, education is perceived to be broader in nature than training, which tends to be instructive, technical, narrower in focus, shorter in duration, and lends itself more to binary testing and explicit measurement. Acknowledging the value and breadth of training, our focus here is on education, with its associated depth and temporal dimensions.

There are two ways we would like to propose for thinking more deeply about the impact of education for the public sector. The first is education and development activities that individuals receive before they enter the public service. This is an area in which universities, technical and further education providers (such as TAFE), and secondary education providers (among others) have a direct role in developing the skills and capabilities of people who go on to work in government and the public sector. Yet this appears to be an under-explored area in regard to tracking the “impact” of what is taught, who teaches it, how, and who ends up going into a public service career and why (among other key questions). We know that universities communicate with employers and alumni, and are increasingly focusing on live policy debates and work-integrated learning opportunities, occurring in tandem with reviewing the latest ideas and scholarship to develop innovative curricula. It is not always clear, however, how public services view, or directly feed into, expectations about the types of graduates that emerge from the higher education sector and their job-readiness. When looking globally, national education policies and incentives, employability drivers, and culture play a role, not only with respect to the education side of the relationship but also with respect to public service recruitment demands. For example, countries that historically pursued a generalist or Oxbridge approach (Oxford and Cambridge dominant approaches) to public service recruitment, such as the UK and Australia, differ in their expectations for specific public administration or political science qualifications compared to countries where public service recruitment is deliberately tied to specific technical skills of public administration, policymaking or political science credentials, or where holding such a qualification becomes a de facto entry ticket into the public service.

The second is education and development activities that individuals receive whilst employed in government and the public service. This segment of education activities is distinct from that of the university sector for a number of reasons. In many respects, the bar for “impact” and return on investment exceeds that of the university sector. This is primarily because many activities are fully or partially[2] taxpayer-funded, and thus decisions are made in an environment of finite financial resources to deliver services and outcomes for citizens. Thus, there are opportunity costs to funding educational activities that directly equate to not funding other activities. The potential and scope for public sector education and development activities to create better public value become a key criterion (Moore 1995; Alford and O’Flynn 2009). The stakes are therefore particularly high for both educational providers and public services to capture and demonstrate the impact of learning activities for creating better public value for the communities they serve. More work is needed to better understand the impact of these activities within the public service, but also beyond it, especially regarding those who move between public and private sectors, as well as wider life outcomes and societal impacts.

When it comes to education for public sector leaders, some have argued that such programs should be considered a public service (see Hiedemann, Nasi, and Saporito 2017). Education activities for senior public servants add a unique layer of dimensions and factors to consider. Research is increasingly revealing the positive impacts that formal executive education can have on leadership capacity and organizational outcomes (see King and Nesbit 2015; Avolio et al. 2009, 783; van der Meer and Marks 2018; Lacerenza et al. 2017, 1686; Broussine and Ahmad 2013). The rationale for wanting to better understand outcomes often revolves around return on investment (ROI) considerations. However, some have argued this is a “bad” line of questioning, one that becomes about defending investment rather than considering it (see Tuff and Goldbach 2018; Dennett 2019). Such a narrow and retrospective approach, which focuses primarily on “proving” impact against whatever the nominated criterion is, misses the opportunity to consider impact holistically at the outset and at a deeper level throughout the entire activity in order to continually improve and learn.

From the education provider side, what is taught, how, and why are key considerations. What is taught includes (although is not limited to) different ways of thinking and skills such as holistic design principles, self-awareness, ways of achieving cross-jurisdictional and sector collaboration, systems thinking, ethical leadership skills, critical thinking, cultural competency, relationality, an enabling mindset, reflective capabilities, individual and collective leadership abilities, emotional intelligence, and a deep understanding of public value (see Brown et al. 2005; Althaus 2016; Nesbit 2012). How programs are taught (pedagogy and andragogy) needs to be designed in ways that are fit for purpose and place, and practically applied, which includes (although is not limited to) digital and immersive learning, blended learning, peer coaching, high-impact virtual experiences, role-modeling, work-based projects, self-directed learning, peer-assessment, real-time life experiences, reflective activities, face-to-face and cohort-building activities, cross-jurisdiction exchange, and mentoring (see Alford and Brock 2014; Althaus 2016). Choices about content and approach are informed by learning objectives, program logic and a coherent theory of change. Theory of change refers to “predictive assumption[s] about the relationship between desired changes and the actions that may produce those changes” (Connolly and Seymour 2015, 1; Fullan 2006). Theories of change are often unstated, and thus there is a need for a more explicit and consistent focus on what they entail, especially in regard to interrogating assumptions about what it takes to create change and why particular design principles are used and for which purposes. Without this, evaluative insights can prove insufficient, ad hoc, restrictive, costly and narrow, unless we conceptualize evaluations as an opportunity for learning where results are situated within a broader learning agenda designed to articulate and clarify a theory of change (see Bowers and Testa 2019, 534).

From the government and public sector side, how employees learn, what is relevant, and what they can do with those learnings are key considerations. How public servants learn best remains an under-explored area. For instance, the extent to which education impact is affected by different student typologies (motivation, role, attitude, career stage, personal characteristics), time and resource factors, and mindset (growth mindset, intellectual humility, proactive learner) remains unknown. Whilst there is some work on learning theories for the university sector, the extent to which these are fit for purpose for the public sector remains unclear. For example, there are numerous learning and action theories informed by different traditions, such as psychology with the theory of action systems (Reed 1982) and social and observational learning (Bandura 1986), neuroscience with universal design for learning (Rose and Meyer 2006), experiential learning theory (Kolb, Boyatzis, and Mainemelis 2001), transformational learning theory (Mezirow and Taylor 2009), nudge theory (Thaler and Sunstein 2008; UK Nudge Unit), and non-formal and informal learning (Singh 2015), among others. Flowing from this is a question about who is best placed to deliver public sector educational activities – universities, private providers (e.g. education-based, management consultants, professional bodies such as the Australian Institute of Company Directors), the nonprofit sector, the public sector (e.g. through leadership academies), peers (e.g. coaching and mentoring circles), professional bodies (e.g. Institute of Public Administration Australia, Institute of Public Administration Canada, etc.) – and which provider type has the highest potential for positive outcomes.

Flowing on from how public servants learn best is a question about what they are then able to do with their new learnings. For some of the reasons outlined above, what public servants are able to do with new knowledge and experiences from educational activities is incredibly difficult to disentangle and document. Decades of learning transfer research reveal a wealth of information regarding the determinants which influence transfer, some of which include (although are not limited to) learner characteristics, intervention design, delivery approaches, organizational culture, intrinsic motivations, competencies, and risk appetite, among other influences (see Bolden and Gosling 2006; Burke and Hutchins 2007; Tonhäuser and Büker 2016; Johnson et al. 2018). Less is known about what the “transfer gap” entails, that is, why new learnings might not be able to be enacted when one returns to the workplace. Emerging research indicates the importance of learners engaging in informal learning activities post formal training, such as feedback-seeking and reflection, to assist and enhance the transfer of learning (Sparr et al. 2017).

There are also differing perspectives about what the most appropriate learning environment model is for embedding learning. For instance, the 70:20:10 framework has attracted a lot of attention recently, despite a relatively weak evidence base (Johnson et al. 2018). It stipulates that 10% of workplace learning should come from formal training, 20% should be learned directly from others, and 70% through workplace experience. However, given the weak evidence base for this model, more work is needed to collect and analyze evidence about its effectiveness, and, if consensus is reached that it is not effective, about what model should replace it.
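
To make the stipulated split concrete, the short sketch below is a purely hypothetical illustration (not drawn from Johnson et al. 2018 or any other cited study): it compares a learner's logged hours against the 70:20:10 benchmark, with all category names and figures assumed for demonstration only.

# Hypothetical illustration of the 70:20:10 framework: compare logged learning
# hours against the stipulated split (70% experience, 20% social, 10% formal).

BENCHMARK = {"experience": 0.70, "social": 0.20, "formal": 0.10}

def split_vs_benchmark(hours):
    """Return each category's actual share and its gap from the 70:20:10 split."""
    total = sum(hours.values())
    return {
        category: {
            "actual_share": hours.get(category, 0) / total,
            "gap": hours.get(category, 0) / total - target,
        }
        for category, target in BENCHMARK.items()
    }

# Assumed example log for one learner over a year (hours by learning mode).
logged = {"experience": 300, "social": 60, "formal": 90}
for category, result in split_vs_benchmark(logged).items():
    print(f"{category}: {result['actual_share']:.0%} (gap {result['gap']:+.0%})")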

Another complexity lies in the parameters of impact and the available qualitative and quantitative methods for tracking and analyzing it. First, in regard to parameters, there needs to be a clear set of parameters about what is considered to be impact, and transparency about how different types and levels of impact are prioritized or ranked. In relation to tracking and analysis, there are ways of tracking that could arguably be tailored and consistently adopted within and across sectors, such as “survival analysis” and “destination surveys,” among others (see Schifter 2016; Lamb et al. 1998). Some key methods include Brinkerhoff’s (2003) “success case method,” which seeks to understand the factors that lead to extreme success or failure, rather than trying to quantify the numerical rate of success. Kirkpatrick’s (2007) four-level training evaluation model focuses on reaction, learning, behavior and results in order to better understand the extent to which learning is transferred and enacted (see also Kirkpatrick and Kirkpatrick 2012). Given the limitations of any single approach, there is now a move toward a “basket of indicators” approach that draws on aspects of both and combines qualitative and quantitative approaches (ANZSOG, forthcoming).
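
As a minimal sketch of what a “survival analysis” of education outcomes could look like, the illustrative example below estimates a Kaplan-Meier curve for a hypothetical cohort, where the “event” might be promotion, or exit from the public service, following a program; the data, variable names and event definition are assumptions for demonstration, not drawn from Schifter (2016), Lamb et al. (1998) or the other works cited above.

# Minimal Kaplan-Meier sketch with invented data: "duration" is months since a
# program until an event of interest (e.g. promotion), with censoring for
# participants not yet observed to have the event.

def kaplan_meier(durations, events):
    """Kaplan-Meier estimate for right-censored data.
    durations: observed time for each person; events: 1 if the event occurred, 0 if censored."""
    pairs = sorted(zip(durations, events))
    at_risk = len(pairs)
    survival = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        occurred = sum(1 for d, e in pairs if d == t and e == 1)  # events at time t
        observed = sum(1 for d, e in pairs if d == t)             # events + censored at t
        if occurred:
            survival *= 1 - occurred / at_risk
            curve.append((t, survival))
        at_risk -= observed
        i += observed
    return curve

# Assumed follow-up times (months since program completion) and event indicators
# (1 = event observed, e.g. promotion; 0 = censored) for ten hypothetical participants.
durations = [3, 5, 5, 8, 12, 12, 15, 18, 24, 24]
events = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]
for t, s in kaplan_meier(durations, events):
    print(f"month {t:>2}: estimated probability of no event yet = {s:.2f}")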

Whilst new and innovative methods are attractive, close attention to the assumptions that underpin understandings and processes for capturing “impact” is needed, moving beyond the static and toward the longitudinal. For instance, many methods rely on people’s ability to identify, express and reflect on “impact,” notwithstanding implicit and explicit bias and memory recall issues. Caution is warranted about the pitfalls of performance management mindsets that focus only on what can be “counted,” with perverse effects such as maximizing outputs regardless of whether doing so is the most appropriate approach for desired social outcomes (Bohte and Meier 2000). From the outset, more work is needed to establish clear learning objectives and goals about what we can reasonably expect education activities to achieve, work that involves all parties and all pieces of the societal impact puzzle.

4. The research puzzle piece

When it comes to the research piece of the societal impact puzzle, social and behavioral scientists have argued for at least forty years that their disciplines need to do more to help solve real-world practical problems, but doing this, and demonstrating it, has proved difficult (Western 2019a). For the social sciences, “impact” often involves informing government policy and practice. But the assumption of a linear path toward impact, in which university researchers do the work and end-users then adopt and apply what is useful, is not only potentially hazardous but, in many cases, is a path that does not lead very far (Western 2019b). At an institutional level, universities are facing increasing scrutiny and global competition, primarily based on rankings and research excellence assessments to demonstrate quality and reach (e.g. QS World University Rankings; Times Higher Education World University Rankings; Academic Ranking of World Universities).

Although some university ranking schemes include reputational measures, such as employers’ views of the graduate workforce, most continue to rely on external academic journal publications, citation patterns, and other traditional metrics. Internally, the degree of change to KPIs, workload formulas, internal support and funding practices all shape the decision making of individual researchers and educators and their ability to track their own outcomes and impact. Measures of societal impact have yet to be fully incorporated into these systems, which further disincentivizes institutions from tracking and rewarding engagement and impact activities. Although societal impact activities are starting to inform promotion and tenure processes for those in continuing positions, such activities are typically classed as “academic service” in workload calculations. Service activities typically receive a much smaller performance weighting than traditional research publication and teaching activities (see Harley 2010; Macfarlane 2005). There also appears to be a skills gap, with many academics reporting not being well prepared by their institutions, or by their graduate research training, to share their work with nonacademic audiences (see Austin 2002; Bentley and Kyvik 2011). This is also happening at a point in time when large segments of society are moving toward bite-sized, attention-span learning (e.g. less academic reading overall, opinion-piece domination, etc.), in a context of information saturation often marked by sensationalist and infotainment preferences.

Simultaneously, the government and higher education sectors are being asked to become more accountable for the money they spend on research funding (Smith et al. 2013). The impact agenda represents a new phase in academic research evaluation and funding, characterized by a heightened need to demonstrate a return on public investment (Gunn and Mintrom 2018). Over the last decade, governments, funding bodies and universities – worldwide – have embraced a more structured approach to understanding and measuring the concept of societal research impact and the pathways needed to support innovations outside of academe. This shift from a traditional academic impact culture (i.e. one that values research impact among scholars) to a societal impact culture (i.e. one that values the impact of research in society) is changing all aspects of university practice. For example, government funding schemes are now increasingly tailored to support applied research, universities are actively building new partnerships with external organizations, university researchers now seek more support and recognition for their engagement activities, and PhD-trained academics are increasingly finding employment outside of academia.

However, global measures of success continue to focus on traditional academic measures – typically, academic publications, competitive grants, citation counts, and, to a lesser extent, student teaching evaluations. Although the research landscape is changing, with implications for future teaching practices in universities, the ways in which governments, universities and academic disciplines document and reward societal impact are slow to follow (Given et al. 2015). This comes at a time when the academic workforce is more precarious than ever (Richardson et al. 2019; Long 2018), with fewer researchers employed in full-time continuing roles. This leaves many PhD-trained academics taking on very high teaching loads, with little or no time to devote to research activities, and few opportunities to engage with policy-makers and other potential beneficiaries of research outcomes.

Given the institutional structure of universities and the history of incentives that have shaped them, it has long been argued that academics and policy makers are separate communities, with a “research-policy” gap when it comes to uptake of academic work (see Nutley et al. 2007; Newman et al. 2016). While academics can do more to communicate the key messages of their research, the organizational cultures and information infrastructure of policy-related work units also play a large part in influencing the extent of research uptake in government agencies (Cherney et al. 2015, 169; Newman et al. 2016). For those within government, some key factors that limit engagement with academic work include accessibility issues, time constraints, varying skill sets, internal divisions, a lack of incentives, and limited scope to build relationships with academics, among others (Cherney et al. 2015; Newman et al. 2016). When it comes to accessibility of academic research, there are increasing calls for open access publishing practices, but these avenues are often restricted due to prohibitive costs for authors and university reward systems that typically value top-tier journals that tend to charge for access. There are a number of open clearinghouses (e.g. Analysis & Policy Observatory) and journal platforms (e.g. Open Journal Systems), but much more work is needed to make research more accessible, both physically and in terms of content understanding, across multiple mediums for different audiences.

For academics, there is growing scholarship and advice about how to be “entrepreneurial” and ensure research has influence. Recommendations include doing high-quality research, making it relevant and readable, understanding policy processes, being accessible to policymakers, engaging routinely, flexibly, and humbly, situating yourself as either an issue advocate or an honest broker, building relationships (and ground rules) with policy makers, continuously reflecting about whether to engage (and whether it is working), and responding to the context and events which help create a window of opportunity (see Oliver and Cairney 2019; Cairney 2018). However, reflective scholars also recognize that few entrepreneurs succeed, and that relative success results more from societal structures and the policymaking environment than simply from skillful entrepreneurship (Cairney and Oliver 2020).

But how do we know if research has an impact on government practice? There is a growing body of literature and advice about how universities can go about demonstrating this (see Upton et al. 2014), but methods for doing so are challenging, with inherent complexities involved. At a national level, any form of public university and research assessment attracts significant controversy. For government ministers, building political consensus around research impact, both within and outside of parliament, is often difficult and met with resistance (Gunn and Mintrom 2018). For policy designers and public managers trying to develop systems for research impact, simultaneous efforts must be made to ensure such systems are robust, credible, and acceptable to a substantial portion of the academic community (Gunn and Mintrom 2018). The way in which these assessments play out directly shapes or heavily influences research focus and associated activities (especially for those who are non-tenured), and questions remain about what the outcomes and implications are for academics who do or do not perform to the prevailing assessment frameworks.

There are positives and negatives to the shift from a traditional impact culture to a societal impact one. Impact and engagement metrics allow universities to demonstrate, and be rewarded for, engaging industry, government and others in research, even if it doesn’t directly or immediately lead to impact, and in-depth impact case studies allow researchers to shape and describe the important effects they achieve that traditional metrics fail to capture (Zardo 2017). Recent results from the Australian 2018 Engagement and Impact assessments revealed that the university sector received a “medium” to “high” grading overall, with 85% for engagement, 88% for impact, and 76% for approaches to impact (Sawczak 2019). But how these numbers are arrived at, and what utility they serve long term, is less clear. There are also negatives and risks to relying on such assessments and frameworks. These include: adverse effects on university funding formulas, especially for universities based regionally and those outside of what is considered to be top-tier (Hanmer 2019); an overemphasis on outputs as opposed to longer-term outcomes (Smith et al. 2013); ethical and political dilemmas regarding variations in power (Cairney and Oliver 2020, 2); the vulnerability of researchers when they engage in politics and policy (Cairney and Oliver 2020, 2); and how written rules of impact often exacerbate unwritten rules of professional inequalities, such as universities investing primarily in professors rather than early career researchers (see Cairney and Oliver 2020, 2). On balance, serious questions remain about whether such frameworks and assessments help academics secure meaningful “impact” or merely help them play the game and describe enough impact activity to satisfy their employers and funders (Cairney and Oliver 2020).

There are arguably a number of approaches that could be integrated or adapted to help capture and demonstrate the potential impact of research on public sector practice. Some have called for more direct leveraging of research for economic development (see Jarrett and Hearn 2014); evaluating research for socio-economic impact, especially publicly funded academic research, with various strategies emerging about how best to do this (see Scoble et al. 2010); and moves toward universities becoming more “civic” and strengthening their connections to place (see UPP Foundation 2019). However, regarding connection to place, it is unclear how this can occur in an environment where it is arguably contrary to other research measurement strategies and, by association, university drivers (e.g. internationalization, international research commitments). This is telling in and of itself about the kinds of research that are currently considered to have impact.

When it comes to impact on policy, some have called attention to the need to critique linear assumptions, preferring “evidence informed” to “evidence based,” which assumes rationality and neutrality, whilst acknowledging the significant role that politics and policymaking processes play in shaping outcomes (see Bowers and Testa 2019, 524). Where the locus of attention should be placed is also contested. Some argue that government actors ought to want to learn about why a new policy works as much as they want to know that the policy works (see Bowers and Testa 2019, 521). Some have argued that the future of evidence-informed public policy practice needs to involve three key aspects: cross-sector collaborations using the latest theory plus deep contextual knowledge; applying the latest insights in research design and statistical inference for causal questions; and a focus on assessing explanations as much as on discovering what works (see Bowers and Testa 2019, 521). At the same time, there are questions about how universities and researchers can raise awareness and better educate policymakers, industry partners, community groups, etc., about the value of research, which can positively inform practice and innovation.

5. Where to from here?

Moving forward, more work is needed to connect the different education and research pieces of the puzzle (among others) and to understand them not only as standalone puzzles but also, crucially, as ones that fit within a much larger societal impact puzzle. Connecting these pieces of the societal impact puzzle is needed to better document and demonstrate the public value that is generated from collective endeavors across sectors. Simultaneously, more work and reflection are needed about what is legitimately and realistically the role of universities and researchers, and what their public value is and can be. This is especially true in countries such as Australia, where the policy focus of successive governments has been on the impact and value of education for the individual, and on questioning whether research has public value, in parallel with declining public investment.

Having outlined some of the main drivers and considerations at play in different sectors about what impact is and what counts, we propose a number of sense-making questions below to aid discussion and reflection, and to help inform understandings and parameters around what the pieces of the societal impact puzzle may entail in different sectors and why. The questions below are illustrative of several areas requiring further research and applied practice. They are designed to be used as a planning and reflective tool when individuals and entities think about and develop processes and practices for demonstrating impact.

6. Conclusion

We intuitively and intrinsically know that work across different sectors, such as education and research, has an impact on government practice and society at large. The fact that investment in various forms occurred for so long before the rise of the current impact agenda speaks to this. The pressure to demonstrate this impact, and to do so to the best of our ability in ways that complement rather than compete across sectors, demands a collective endeavor approach, one where all the pieces of the societal impact puzzle fit together and play their part. This will enable advances in thinking and the development of new innovative methodologies, and pave the way to consistently and systematically begin to demonstrate various forms of “impact,” and what works and why across sectors, dimensions and time scales. Doing so will enable us to know and tell more nuanced and authentic stories about what it collectively takes to have an impact on government practice and the better public value this brings for society at large.

Acknowledgments

The authors wish to acknowledge the 2019 Impact Workshop, co-partnered by ANZSOG, ANU, APO and PSRG, for practical and thought-provoking discussions.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

[1] ‘Impact’ is referred to in inverted commas as there is no one settled definition.

[2] We note that public servants may undertake a variety of educational or training opportunities during the course of their employment in the public service, some of which they might fund themselves or to which they might partially contribute financially. Our focus here, though, is predominantly on those activities which the employer predominantly funds.

References