Research Article

Developing key performance indicators for strategic environmental assessment effectiveness: a systematic framework

Received 19 Sep 2023, Accepted 06 May 2024, Published online: 23 May 2024

ABSTRACT

Key performance indicators (KPIs) are critical quantifiable metrics used to evaluate performance or effectiveness. In the context of Strategic Environmental Assessment (SEA), they provide the framework by which the effectiveness of SEA practice can be examined and compared. A wide range of KPIs of SEA effectiveness have been developed, but to date they have not been systematically linked to the full range of dimensions of SEA effectiveness. The development of a concise framework of 10 SEA effectiveness KPIs for Ireland addresses this methodological gap. It has been developed through a bottom-up compilation of existing SEA effectiveness KPIs from the international literature, and then sieving these by: A) ensuring they tackle current practice weaknesses for SEA in Ireland; B) aligning them with strategic/tactical/operational success factors for effective SEA; C) prioritising them on the basis of feedback from SEA experts; and D) further refining them through project steering committee opinion. The proposed KPIs are unlikely to be pertinent for all jurisdictions, but the process used to develop them can be applied elsewhere.

1. Introduction

Strategic Environmental Assessment (SEA) effectiveness has been the subject of much research and debate in the last decade, partly in response to Article 11 of the European SEA Directive (CEC 2001) which requires Member States to report on the effectiveness of their SEA systems at seven-year intervals. Originally defined as ‘how well [SEA] works or whether it works as intended and meets the purposes for which it is designed’ (Sadler 1996), it has been extensively discussed in the international literature, and developed into discrete SEA effectiveness dimensions (e.g. Acharibasam and Noble 2014, Bond et al. 2018, Cashmore et al. 2008, Chanchitpricha and Bond 2013, Pope et al. 2018). SEA effectiveness has been argued to be highly dependent on the context (e.g. governance, available skills/capacity, power structures and openness) and procedures (e.g. compliance with legislative requirements on scoping, baseline, alternatives, mitigation, etc.). It is manifested, in particular, through its substantive and normative achievements (e.g. changes to the plan, achievement of environmental protection and promotion of sustainability) (e.g. Acharibasam and Noble 2014, Bina et al. 2011, Bond et al. 2022, Dalal-Clayton and Sadler 2017, Hanna and Noble 2015, Li et al. 2016, Runhaar and Driessen 2007, Pope et al. 2018, Van Buuren and Nooteboom 2009, Wang et al. 2012, Zhang et al. 2013). Additional aspects have also been contended to play a significant role in the overall effectiveness of SEA, such as the level of public engagement and input (Runhaar and Driessen 2007, Say and Herberg 2016), and its knowledge and learning opportunities (e.g. Jha-Thakur et al. 2009, Cape et al. 2018).

The various SEA effectiveness elements discussed in the literature were consolidated by Thérivel and González (2019) into seven distinct dimensions: context; procedural; normative; pluralist (public consultation and integration of feedback into the assessment); transactive (costs and benefits); substantive (changes to the plan in response to SEA and improved planning outcomes); and knowledge and learning (improved understanding of environmental issues and SEA processes for enhancing future assessments). A number of authors have also explored the links between these dimensions (e.g. Bond et al. 2022, Chanchitpricha and Bond 2013, Li et al. 2016) and attempts have been made to develop indicators or criteria for some of them (e.g. Chanchitpricha and Bond 2013, Eales and Sheate 2011, Hanna and Noble 2015, Wang et al. 2012). Criteria are the conditions needed to comply with a desired outcome/goal, while indicators are the measurable states facilitating the assessment of whether or not a particular criterion has been met. Indicators are particularly important if we are to effectively monitor how SEA practice changes, and indeed improves, over time.

A range of both effectiveness criteria and indicators can be found in the international literature (refer to supplementary material for full detail) for each SEA effectiveness dimension. However, their development and selection has tended to be implicit (e.g. conceptual and exploratory) rather than explicit (e.g. following a systematic approach). Only four papers from an extensive literature review were found to specify a structured approach for choosing indicators of SEA effectiveness. Zhang et al. (2013) used a ‘bottom up’ approach to identify four ‘critical factors’ for effective SEA: communication and understanding; resources and capacities; timing and organisation; and will and trust. They identified 201 criteria from a review of 30 articles, and sieved/condensed them into those final four categories. Dwyer et al. (2014) and O’Regan et al. (2016) put forward an explicit, ‘top down’ methodology for choosing SEA effectiveness indicators for Ireland, based on interviews with 20 SEA practitioners. They categorise the resulting indicators under three key headings: environmental report and assessment process; integration of environmental considerations into planning; and sustainable development and environmental protection. These headings can be partially aligned with those of Acharibasam and Noble (2014) who argue that:

‘At the most basic level, the effectiveness of SEA is a function of its inputs (i.e. institutional requirements or provisions for SEA), process (i.e. assessment methodology), outputs (i.e. results and changes in plans/programmes/policies) and outcomes (i.e. longer-term change as a result of the SEA experience). Understanding the effectiveness of SEA inputs and processes is important, but the outputs and outcomes of SEA are the ultimate measures of its value added’. (p. 178)

It is noteworthy that these attempts to systematically develop an effectiveness indicators framework generally stop at the indicator characterisation stage (e.g. critical factors) or at the formulation of an effectiveness question (e.g. were changes made to the plan as a result of SEA recommendations?). The international literature to date does not include a comprehensive set of key SEA effectiveness indicators spanning all of the SEA effectiveness dimensions. Moreover, the majority of indicators identified in this research (refer to supplementary material) fail to specify unambiguous measurement metrics, or indeed desirable targets.

In Ireland, the Environmental Protection Agency has carried out two SEA reviews based on differing effectiveness criteria, with no clear reporting of how these criteria were defined. These effectiveness reviews reflect a gradual change from a procedural/output focus to a more normative/outcome-oriented focus. The first review (EPA 2012) used 16 criteria, 13 of which were procedural, with one each for context (SEA governance), pluralist (consultation), and substantive (influence of SEA) effectiveness. The second review (EPA 2020) referred to the full suite of SEA effectiveness dimensions (Thérivel and González 2019), but still with a noticeable focus on procedural effectiveness. Of the 62 review criteria, 31 (50%) were procedural; 13 (21%) related to consultation (pluralist); and the remaining 18 (29%) criteria were shared between four effectiveness dimensions (i.e. context, substantive, normative and transactive), with no criteria for knowledge and learning.

Key performance indicators (KPIs) are routinely used in many sectors to measure and report on the impacts of specific products or activities, for example food waste (De Menna et al. 2018), building energy efficiency retrofits (McGinley et al. 2022), construction (Chan and Chan 2004), transport systems (Djordjevic and Krmac 2016), sustainability performance of buildings (Balaras et al. 2020), and business (Holliday 2021, Pichler 2015). KPIs tend to be linked to objectives that aim to ensure the successful competitive performance of an organization (Rockart 1979), which typically entails efficiency, customer satisfaction, and decreased waste and costs. Much of the literature around KPIs is ‘grey’ in that it is written by and for industry and business people, but much can be learned from how KPIs have been developed in these non-SEA contexts.

KPI practice in these sectors can provide, for example, a quantifiable framework by which the effectiveness (or performance) of SEA implementation and practice can be examined and compared, not only across time but also across planning hierarchies, sectors, and jurisdictions. Such an examination can be done by looking at the efficiency of procedures, stakeholders’ and the public’s satisfaction with the assessments’ outcomes, and the cost-effectiveness of assessments. KPIs can be further expanded by bringing in strategic, tactical and operational considerations often found in the military and security sector literature on KPIs (e.g. Abreu and Piedade 2018, Sukman 2016, Zvelo 2021). For example, strategic KPIs consider high-level objectives and indicators relating to long-term decisions. A Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis can be useful for identifying weaknesses and threats that could define strategic priorities and thus inform the selection of KPIs (e.g. strengthening the policy context or enhancing SEA awareness). Tactical KPIs relate to actionable aspects, with specific resources, responsibilities and timelines to improve effectiveness (e.g. training is provided ahead of SEA so that decision-makers understand the environmental impacts of planning alternatives). Operational KPIs focus on short-term actions needed to achieve tactical and strategic goals (e.g. the SEA environmental report considers a range of reasonable alternatives, and clearly presents the environmental impacts of each alternative).

These considerations from the wider international literature on KPIs are therefore, arguably, relevant to the development of a framework for the systematic identification and selection of SEA effectiveness KPIs. Moreover, the techniques commonly used to support such frameworks are also transferable to the SEA context. These include literature reviews, brainstorming, interviews, surveys with experts, workshops, and building off previous KPI lists (Bentaleb et al. 2015, Enshassi and El Shorafa 2015, Holliday 2021, Jahangirian et al. 2017, Pichler 2015). Expert consultation approaches have already been successfully applied in the selection of SEA indicators (e.g. Dwyer et al. 2014, Donnelly et al. 2007, Hanna and Noble 2015, O’Regan et al. 2016), which is encouraging. In addition to the systematic, transparent and often collaborative approaches to the identification of KPIs in non-SEA contexts, the KPI literature also puts emphasis on developing concise, implementable and measurable indicators around priority performance goals, that allow progress and trends to be evaluated against set targets (Holliday 2021, Pichler 2015).

In light of the above, this paper describes a framework for the systematic selection of KPIs for SEA effectiveness in Ireland which has been developed taking into account criteria and techniques from other areas and sectors. It starts by describing the methodological approach used, and outlines key findings for each methodological step. Section 3 presents the final set of KPIs, which are expected to be used in EPA reviews of individual SEA processes, and in Ireland’s periodic reviews of SEA effectiveness (e.g. EPA 2012, 2020). Section 4 then discusses how the KPIs relate to wider issues of SEA effectiveness and associated good performance targets. The paper aims to contribute to debates on SEA effectiveness, highlighting the importance of transparent and systematic approaches to KPI selection, the importance of giving weight to underperforming dimensions of SEA effectiveness, and the relevance of providing clear metrics to facilitate their application and interpretation.

2. KPI selection framework and key findings

The identification and selection of KPIs for SEA effectiveness followed a systematic and comprehensive approach to first capture good current international practice on SEA effectiveness criteria, then prioritise these based on a review of the state-of-the-art and expert input, and ultimately tailor them to the local context, in this case Irish SEA practice. It involved two parallel processes: 1) a bottom-up identification and sieving (based on the strengths and weaknesses resulting from a SWOT analysis of current Irish practice) from a long list of international SEA effectiveness indicators; and 2) a top-down consideration of the goals that the KPIs should prioritise, extracted from both the literature review and expert input. Figure 1 illustrates the sieving approach.
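Viewed abstractly, the sieving approach amounts to passing a long list of candidate indicators through a chain of successive filters. The following minimal Python sketch illustrates that shape only; the indicator entries, filter attributes and survivors are invented stand-ins, not items from the project's actual long list:

```python
# Illustrative sketch of the indicator-sieving pipeline (bottom-up and
# top-down filters applied in sequence). All data here are hypothetical
# stand-ins, not the study's real indicators.

def sieve(indicators, *filters):
    """Apply each filter (a predicate) in turn, keeping only survivors."""
    for keep in filters:
        indicators = [i for i in indicators if keep(i)]
    return indicators

# Each tuple: (name, dimension, tackles_irish_weakness,
#              aligned_with_goals, expert_priority)
long_list = [
    ("clarity of screening documentation", "procedural", False, False, False),
    ("consideration of within-plan alternatives", "procedural", True, True, True),
    ("changes to plan in response to SEA", "substantive", True, True, True),
    ("existence of national SEA review mechanism", "context", False, True, False),
]

shortlist = sieve(
    long_list,
    lambda i: i[2],  # Steps 2-3: SWOT filter (tackles a current weakness)
    lambda i: i[3],  # Step 4: aligns with strategic/tactical/operational goals
    lambda i: i[4],  # Steps 5-7: prioritised by experts
)

print([name for name, *_ in shortlist])
```

The order of the filters mirrors the sequence of methodological steps described below; in practice the later, expert-led steps also reworded and merged indicators rather than simply dropping them.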

Figure 1. Flow diagram of the methodological bottom-up (1 to 3) and top-down (4 to 7) steps for sieving international SEA effectiveness indicators to select Key Performance Indicators for Ireland. ‘n’ refers to the total number of indicators at that methodological step, with specific numbers provided for each SEA effectiveness dimension.


2.1. Step 1: Literature review

The basis of the framework was a thorough international literature review to identify existing criteria and indicators of SEA effectiveness. The search was based on the Web of Science and Scopus databases and the Google Scholar academic index, using combinations of the words ‘criteria’, ‘indicator’, ‘effectiveness’, ‘performance’, ‘strategic environmental assessment’, ‘impact assessment’ and ‘plan’. The resulting 1,115 articles were sieved to eliminate repetitions, articles that did not relate to SEA, and articles that focused on SEA indicators rather than indicators of SEA effectiveness. The remaining 148 articles were reviewed.

The indicators of SEA effectiveness listed in the articles were often phrased as questions or broad generic statements, and so had to be ‘reworded’ as clear indicators for consistency (e.g. ‘has the Scoping Report been made public?’ was reworded to ‘public availability of scoping report’), as well as to allow amalgamation and comparison (e.g. ‘existence of specialised scientific units supporting SEA’ and ‘enforcement capabilities (workers trained in key environmental fields)’ were both reworded as ‘existence of specialised SEA personnel’). Overlaps between the indicators were then eliminated to come up with a list of distinct indicators. These are provided as supplementary material. They represent the most recent comprehensive inventory of SEA effectiveness indicators published in the literature and, as such, provide a significant resource for effectiveness reviews and research.
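The rewording-and-amalgamation step is essentially a normalisation and deduplication exercise. As a rough sketch only (the mapping below reuses the two worked examples from the text; the canonical-label dictionary and helper are hypothetical, not part of the study's method):

```python
# Hypothetical sketch: map question-style or overlapping criteria onto
# canonical indicator labels, then drop duplicates. Examples illustrative only.

CANONICAL = {
    "has the scoping report been made public?":
        "public availability of scoping report",
    "existence of specialised scientific units supporting sea":
        "existence of specialised SEA personnel",
    "enforcement capabilities (workers trained in key environmental fields)":
        "existence of specialised SEA personnel",
}

def normalise(raw_items):
    """Reword each raw criterion and keep only distinct indicators."""
    seen, distinct = set(), []
    for item in raw_items:
        label = CANONICAL.get(item.strip().lower(), item.strip().lower())
        if label not in seen:  # eliminate overlaps
            seen.add(label)
            distinct.append(label)
    return distinct

raw = [
    "Has the Scoping Report been made public?",
    "Existence of specialised scientific units supporting SEA",
    "Enforcement capabilities (workers trained in key environmental fields)",
]
print(normalise(raw))
```

In the study itself this judgement was made manually by the researchers; the sketch simply makes explicit that two differently phrased criteria can collapse into one distinct indicator.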

The preponderance of procedural indicators (123 out of 293 distinct indicators, 42%) is notable, and reflects the focus of past SEA effectiveness studies. Context was also a major focus of SEA effectiveness research, primarily considering whether appropriate SEA legislation and organisational structures are in place (e.g. Huang et al. 2017). Arguably this is no longer an issue in most European and many other jurisdictions. The review suggested that the substantive, normative, knowledge and learning, and transactive dimensions have been historically under-studied.

2.2. Steps 2 and 3: Bottom-up filtering of indicators – SWOT Analysis

The distinct indicators were then filtered down on the basis of current key weaknesses and threats in Irish practice. These derived from a SWOT analysis based on the second and most recent SEA effectiveness review for Ireland (EPA 2020): Table 1 summarises key considerations, and the supplementary material provides a full commentary on identified weaknesses and threats. Sieving using this approach eliminated some indicators, notably context and procedural indicators where Irish practice is already good, for instance the existence of a national SEA review mechanism, SEA targeting the right plan-making level, and clarity of screening documentation (EPA 2020).

Table 1. SWOT summary of the findings of the second Irish SEA review (based on EPA 2020).

2.3. Step 4: Top-down review of indicators – SEA Effectiveness Goals

A second sieve of indicators focused on those indicators that align most closely with SEA strategic/tactical/operational goals. As there is no SEA literature on these topics, their identification was carried out through brainstorming by the research team, based on the team members’ practical experience of carrying out SEAs and their research experience on SEA effectiveness (Table 2).

Table 2. Strategic, tactical and operational goals of SEA (from the literature and brainstorming by the project team).

This process identified some indicators that should particularly be considered for inclusion as KPIs (e.g. public engagement, achievement of environmental targets/standards) and some that could be excluded (e.g. clarity of SEA scoping documentation, since this is in any case covered in SEA environmental reports in Ireland). This experience- and expertise-based analysis also highlighted the importance of lessons learned by plan-makers (knowledge and learning), and the costs and benefits of SEA (transactive). Indicators related to these were consequently reviewed and developed further, resulting in some new indicators not previously considered in the literature.

2.4. Step 5: Top-down review of indicators – Interviews with experts

The top-down approach also included expert input. Semi-structured interviews were carried out with 20 SEA and planning experts (10 Irish, 10 international) once ethical approval was secured. The interviews sought, among other things, to establish the leading issues and priority areas for improvement in SEA practice, and thus to help focus the KPIs.

The interviews included an open question ‘What are key elements of good/best practice in SEA?’. The responses are summarised in Table 3. The most frequent response was SEA contributing to environmental integration into the plan and informing decision-making (9 responses). This was described as the ‘Holy Grail’ by one respondent, with another noting that: ‘the SEA is there to help make the plan better’. The next most frequently highlighted considerations were: a robust, up-to-date baseline (7 responses); early engagement and iterative communication between the plan and SEA teams (5 responses); and various elements of good process, such as good screening, scoping, impact assessment and mitigation (5 responses). Respondents noted that collaborative working between the SEA and plan-making teams, with the SEA team ‘telling the planning team what they need to hear and not what they want to hear’, was important, particularly in the Irish context. Effective monitoring which actively influences the next cycle of plan-making (4 responses) was felt to be essential to learning what has and has not worked, and to improving future assessments and plans.

Table 3. Elements of good/best SEA practice identified by interviewees (N.B. at this stage of the interview, respondents were not explicitly thinking in terms of effectiveness dimensions, as the question on SEA effectiveness dimensions followed this one; the responses have been categorised here by the authors to reflect the research framework).

Another question referred to the dimensions of SEA effectiveness, explaining these to interviewees before asking them which of these they thought to be most important (refer to section 4 and Figure 2 for more detail). There was a broad correlation between the responses to the two questions, with elements related to the procedural and substantive dimensions being listed most frequently and these dimensions being considered most important, and with much less focus on the other SEA effectiveness dimensions, particularly the transactive dimension.

Figure 2. SEA effectiveness dimensions identified by interviewees and survey respondents.


The interviewees unanimously felt that there should be no more than 10 KPIs, observing that ‘less is more’ and that ‘if they are meaningful, they will be able to tell the story’. Given these findings, an aim was established within this research to bring the long list of 293 indicators down to 10 or fewer.

2.5. Step 6: Top-down review of indicators – Online survey

An online questionnaire was also circulated to 284 SEA experts (i.e. authors of the 148 articles included in the literature review) and posted on the website of the International Association for Impact Assessment for three weeks, with 41 responses in total. The survey presented a list of 20 possible indicators from which respondents could choose those that were most relevant for measuring effectiveness (Table 4). These were preliminarily selected using the bottom-up (i.e. SWOT analysis) and top-down (i.e. strategic/tactical/operational goals plus the expert interviews) approaches described above, with the intention of putting them to further scrutiny via this survey.

Table 4. Most relevant indicators for measuring SEA effectiveness according to international survey respondents (N.B. the related effectiveness dimensions were not included/presented in the online survey).

The most often-chosen indicators related to consideration of alternatives (10%), proposed monitoring framework (9%), actual monitoring carried out (9%), and SEA’s impact on the plan (8%). Again, procedural indicators garnered the most votes, then the substantive, pluralist, and knowledge and learning dimensions. Consistent with the interview findings, the transactive dimension (SEA benefits and costs) had the lowest take-up (2%). Also consistent was that 52% of the survey respondents recommended having between 1 and 10 KPIs; 42% between 11 and 20 KPIs; 6% between 21 and 30 KPIs, with nobody opting for more than 30 KPIs.

2.6. Step 7: Expert review

The final formal step of the top-down approach involved an additional expert review by the project steering group and the research team’s two international SEA experts. They commented on the preliminary set of 20 indicators for Ireland (Table 4), taking into account the priority indicators selected by the international survey respondents. They further refined the indicators to ensure that they are clearly defined, understandable, representative and encompassing of the SEA effectiveness dimensions. For example, in this last step, several indicators on monitoring were amalgamated and reframed, and a new KPI on normative SEA effectiveness was added. The normative KPI did not come from the literature: rather it came from the experts’ suggestions for how this dimension might be considered.

A final, unanticipated change to the KPIs was made in response to an online training course about the KPIs, which was attended by more than 50 national and international SEA practitioners and researchers. During the course, participants were asked to discuss in groups whether the draft KPIs on the knowledge/learning and transactive dimensions – which would have involved interviewing the planning team after they had completed the SEA – were feasible. One group recommended that planning teams should document their learning, and their thoughts about costs and benefits, as part of carrying out the SEA, rather than in post hoc interviews. Although this goes beyond the legal requirements of the SEA Directive and Irish legislation, it would be easier to evaluate, and more likely to lead to improved learning by the planning team. This was then adopted for KPIs 9 and 10.

3. Selected SEA Effectiveness KPIs

Table 5 shows the 10 KPIs of SEA effectiveness that resulted from the study, and the data sources for these KPIs. KPI 1 (public availability of SEA documents) represents a necessary precursor to effective public participation and subsequent monitoring of the plan’s impacts. KPI 2 (‘within plan’ alternatives) is a key component of the effective development of a plan that has minimal significant environmental impacts. This KPI was developed in response to Irish SEA practice, where SEA alternatives are typically variants of no plan versus business as usual versus the sustainable new plan. Irish guidance (EPA 2015) has stressed the importance of alternative components of the plan, as opposed to broad-brush alternatives to the plan, but existing practice still favours whole-plan alternatives. KPI 3 (cumulative impacts) is a key differentiator between plan SEA and project EIA; it is where SEA provides significant ‘value added’ to the assessment process.

KPIs 4 and 5 (proportion of recommendation by environmental authority and public taken on board) indicate how effectively the SEA’s public consultation requirements have been carried out and how responsive the plan-making team are to outside information about their plan’s environmental impacts. Any significant differences between KPIs 4 and 5 could also indicate a willingness or unwillingness to give proper weight to the public’s views, compared to the more formal and statutory views of environmental authorities. KPI 6 (changes to the plan in response to the SEA) is the indicator flagged up most frequently as describing SEA effectiveness: it indicates both that the SEA has been influential in ensuring that the plan has minimal negative impacts, and that the plan-making team have been responsive to information about their plan’s environmental impacts.

KPI 7 (‘level of SEA environmental contribution’), the indicator of normative effectiveness, aims to capture the true environmental contribution of SEA. It is a qualitative judgement of: a) whether the SEA focuses on the key environmental impacts of the plan; b) whether the SEA tests the plan’s impacts against environmental targets; and c) whether the SEA leads to significant changes in the plan towards achieving the environmental targets. KPI 8 (monitoring and public availability of monitoring information) focuses on an aspect of SEA that has traditionally been poorly carried out in Ireland, but which has the potential to significantly improve subsequent rounds of plan-making.

KPI 9 (statement by the planning team about lessons learned) and KPI 10 (statement by the planning team about costs and benefits of SEA) will hopefully encourage plan-makers to reflect on the SEA process and its effectiveness. The costs of SEA are easy to quantify, but the benefits are much harder to measure. These two KPIs should challenge the plan-making teams to consider how to minimise the costs and maximise the benefits of SEA, with improved effectiveness in the future.

Table 5. Key Performance Indicators (KPIs) of SEA effectiveness in Ireland.

The KPIs also cover the strategic, tactical and operational levels: the strategic level broadly equates to the normative and knowledge/learning dimensions; the tactical level to the substantive and transactive dimensions; and the operational level to the procedural and pluralist dimensions. The ‘context’ dimension can be understood as the constraints and opportunities within which the organisation aims to achieve its strategic, tactical and operational goals.

4. SEA Effectiveness Dimensions: Importance and Links

In addition to being asked to propose elements of good/best SEA practice, the 20 expert interviewees were asked to identify which – they could identify more than one – of the seven effectiveness dimensions they thought were most relevant to SEA effectiveness. The results were, in order of priority: substantive (10 respondents); procedural (7); pluralist (5); normative (5); knowledge and learning (3); context (2); and transactive (1). There is a notable difference between this more deliberate consideration of SEA effectiveness dimensions; the results of the unprompted and open questions (Table 3); and the survey, which also did not clearly focus on SEA effectiveness dimensions (Table 4). Where SEA effectiveness dimensions are expressly considered – the middle row in Figure 2 – there is noticeably less emphasis on the procedural dimension and noticeably more on the normative dimension. One expert queried whether transactive effectiveness needs its own indicator ‘given that nobody really seems to worry too much about that topic’, although another interviewee noted that ‘Financial costs are key and so are resources, staff and expertise assigned to it’.

Given the small number of respondents, this shift from procedural to normative should not be taken as a general trend, but it does suggest that KPIs organised around dimensions of SEA effectiveness may help people to focus on SEA effectiveness dimensions that they would otherwise unconsciously disregard. This is consistent with Acharibasam and Noble’s (2014) statement that a judicious use of SEA effectiveness KPIs could support a greater focus on the longer-term benefits of SEA, including the added-value of SEA beyond any one specific SEA process. It also reinforces the validity of using SEA effectiveness dimensions to underpin the selection and development of KPIs.

Some interviewees drew links between different dimensions of SEA effectiveness. Advocating for the substantive dimension, one respondent noted that SEAs can also improve the transparency (pluralist effectiveness) of plan-making:

SEA should make a difference. There is a risk of doing wonderful SEA work while still not influencing the outcome of the plan. That is more likely when there is strong political commitment to other objectives which leaves little room for the influence of SEA. But in those cases, there is a really important role for SEA to highlight that, so even if we don’t achieve this substantive objective, we achieve transparency.

Another interviewee also supported the substantive dimension, but linked this to the transactive dimension:

The SEA process typically results in some level of substantive change but it’s not always clear to me that it was worth the time and effort. And sometimes the changes are not best for the [plan] but done to acquiesce to the most ardent opposition. This gets [to] the heart of the transactional element: is this really creating a better, more efficient end?

5. Reflections on KPI Selection and Implementation

The methodology described in this paper has resulted in 10 KPIs that cover the full range of SEA effectiveness dimensions. The process of triangulation – coming at the issue from different points of view, and especially from top-down as well as bottom-up – has led to a limited set of useful KPIs. This includes three new KPIs (7, 9 and 10) that clearly address the normative, knowledge/learning, and transactive effectiveness dimensions.

KPIs should ideally be SSMART (Spatial, Specific, Measurable, Attainable and action-oriented, Relevant, and Time-bound) (Schomaker 1997, González et al. 2011). However, in some cases it was difficult to find indicators that were both measurable and relevant. For instance, the transactive dimension of effectiveness could be partly measured by the time taken to carry out the SEA, the number of environmental indicators used, the number of meetings held, etc., but none of these truly measure the costs versus benefits of SEA. Similarly, it was difficult to find an indicator that covered normative effectiveness whilst also being measurable and time-bound: how can one determine, for instance, whether a single SEA, or even all SEAs together, have helped to improve health outcomes or increase resilience?

Different studies have used different numbers of indicators of SEA effectiveness, ranging from 14 (Dwyer et al. Citation2014) to 50 or more (Dalal-Clayton and Sadler Citation2017, EPA Citation2020, Zhang et al. Citation2013). A larger number gives a more comprehensive understanding of effectiveness but requires significant time and resources for monitoring and review. A smaller number is easier to manage and communicate, and cheaper to collate, but gives a more limited understanding of effectiveness. This study identified more than 200 distinct indicators of effectiveness, but the experts consistently recommended having no more than 10 KPIs. Providing a robust indication of SEA effectiveness through just 10 KPIs could only be achieved by ruthlessly eliminating tangential indicators and focusing clearly on how the SEA effectiveness dimensions could truly be represented, prioritising crucial problem areas.
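
The final prioritisation step can be sketched, again purely illustratively, as ranking candidates by an aggregate expert score and truncating to the agreed limit. The scoring scheme, indicator names, and scores below are all hypothetical; the study's actual prioritisation was based on structured expert feedback rather than a single numeric score:

```python
def select_kpis(scores: dict, limit: int = 10) -> list:
    """Rank candidate indicators by expert score and keep the top `limit`."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:limit]]

# Hypothetical: ~200 candidate indicators with descending expert scores,
# mirroring the scale of the long-list described in the text.
expert_scores = {f"indicator_{i}": (200 - i) / 200 for i in range(200)}

kpis = select_kpis(expert_scores, limit=10)
```

Whatever the scoring mechanism, the key design choice is the hard cap: fixing the limit in advance forces the "ruthless elimination" the text describes, rather than letting the list grow to accommodate every plausible indicator.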

The SEA monitoring literature recommends re-using data already collected for other purposes, as this not only saves time and money but also allows the use of indicators with a historical series of records, facilitating comparability over time (e.g. da Silva et al. Citation2019). Indeed, the initial research plan had been to re-use data from the two previous Irish SEA effectiveness studies (i.e. EPA Citation2012, Citation2020) to populate the selected KPIs. However, none of the 10 KPIs is clearly covered by the criteria used in either of the previous studies, so a new baseline has to be established. Several of the selected KPIs involve reviewing documents prepared after plan adoption (SEA Statements, monitoring reports) and interviewing the planning team. While this undeniably adds complexity to their monitoring, it helps to ensure that the KPIs are meaningful and truly representative of SEA effectiveness in Ireland.

6. Next Steps and Conclusions

This paper has described the development of an SEA effectiveness KPI framework for Ireland through a literature review of current international good practice, SWOT analysis and expert consultations. The resulting KPIs can be used during the SEA process, for plan-makers/consultants to do a self-check; during the SEA document review process, for consultees/objectors to check on the SEA process and resulting documentation as well as on plan contents; and during periodic national SEA performance reviews, to check on whether SEA is achieving its objectives efficiently and effectively. Even though individual SEA processes can be very different (e.g. with different numbers of public comments or levels of integration of mitigation measures), the 10 KPIs are scalable and transferable across planning hierarchies and sectors in Ireland.

The KPI framework has been populated using 22 randomly selected Irish SEAs to establish a baseline from which to measure and compare any future improvements in SEA effectiveness in Ireland. This has given a realistic picture of just how effective – and ineffective – Irish SEAs are and, perhaps more importantly, provides a consistent basis from which to monitor SEA effectiveness going forward.

Given that the KPIs were developed in part around a SWOT analysis of SEA practice in Ireland and interviews with Irish SEA practitioners, they may well not be directly relevant to other jurisdictions. In fact, different authors have emphasised the need to adapt KPIs to different scales and contexts (e.g. Eales and Sheate Citation2011, Fischer and Gazzola Citation2006). Nevertheless, the systematic approach used to develop the KPIs can be used elsewhere, and the KPIs themselves are an example of well-rounded ‘contemporary’ KPIs that review the full range of SEA effectiveness dimensions. The combination of bottom-up and top-down approaches involving a literature review and the views of SEA experts and government officials is transferable to other jurisdictions. The strategic, tactical and operational goals identified for SEA are also transferable, and the international interviews and survey responses capture cross-cutting issues. This approach to identifying KPIs addresses a current practice gap associated with the commonly implicit and unstructured selection of effectiveness indicators. The adoption of the KPI selection framework presented in this paper can contribute to a more systematic selection of SEA effectiveness indicators that, in turn, can contribute to harmonising how SEA effectiveness is measured and therefore support comparability of effectiveness across different SEA systems.

Supplemental material

KPIs_DATABASE_SystematicReview_Complete31stJuly2023.xlsx

Acknowledgments

This research was funded by the Irish Environmental Protection Agency (EPA) and the Office of the Planning Regulator under grant reference 2021-NE-1061. The authors wish to thank the interviewed experts, the international survey respondents, and the project steering committee for their time and valuable insights. Ethics approval reference: HS-E-22-60-Gonzalez.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/14615517.2024.2355706

Additional information

Funding

The work was supported by the Environmental Protection Agency and the Office of the Planning Regulator [2021-NE-1061].

References

  • Abreu A, Piedade R. 2018. Decision support model in the strategic management of the Portuguese air force alpha jet fleet. Asian Journal of Social Sciences and Management Studies. 5(3):72–81. doi: 10.20448/journal.500.2018.53.72.81.
  • Acharibasam JB, Noble BF. 2014. Assessing the impact of strategic environmental assessment. Impact Assess Proj Apprais. 32(3):177–187. doi: 10.1080/14615517.2014.927557.
  • Balaras CA, Droutsa KG, Dascalaki EG, Kontoyiannidis S, Moro A, Bazzan E. 2020. A transnational multicriteria assessment method and tool for sustainability rating of the built environment. IOP Conference Series: Earth and Environmental Science. 410:012068. doi: 10.1088/1755-1315/410/1/012068.
  • Bentaleb F, Mabrouki C, Semma A. 2015. Key performance indicators evaluation and performance measurement in dry port-seaport system: A multi criteria approach. Journal of ETA Maritime Science. 3(2):97–116. doi: 10.5505/jems.2015.88597.
  • Bina O, Jing W, Brown L, Partidario MR. 2011. An inquiry into the concept of SEA effectiveness: Towards criteria for Chinese practice. Environ Impact Assess Rev. 31:572–581. doi: 10.1016/j.eiar.2011.01.004.
  • Bond A, Pope J, Morrison-Saunders A, Retief F. 2022. Exploring the relationship between context and effectiveness in impact assessment. Environ Impact Assess Rev. 97:106901. doi: 10.1016/j.eiar.2022.106901.
  • Bond A, Retief F, Cave B, Fundingsland M, Duinker PN, Verheem R, Brown AL. 2018. A contribution to the conceptualization of quality in impact assessment. Environmental Impact Assessment Review. 68:49–58. doi: 10.1016/j.eiar.2017.10.006.
  • Cape L, Retief F, Lochner P, Fischer T, Bond A. 2018. Exploring pluralism – Different stakeholder views of the expected and realised value of strategic environmental assessment (SEA). Environ Impact Assess Rev. 69:32–41. doi: 10.1016/j.eiar.2017.11.005.
  • Cashmore M, Bond A, Cobb D. 2008. The role and functioning of environmental assessment: theoretical reflections upon an empirical investigation of causation. Journal of Environmental Management. 88:1233–1248. doi: 10.1016/j.jenvman.2007.06.005.
  • CEC (Commission of the European Communities). 2001. Directive 2001/42/EC on the assessment of the effects of certain plans and programmes on the environment. [accessed 14 Sep 2023]. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32001L0042.
  • Chan APC, Chan APL. 2004. Key performance indicators for measuring construction success. Benchmarking: Int J. 11(2):203–221. doi: 10.1108/14635770410532624.
  • Chanchitpricha C, Bond A. 2013. Conceptualising the effectiveness of impact assessment processes. Environ Impact Assess Rev. 43:65–72. doi: 10.1016/j.eiar.2013.05.006.
  • Dalal-Clayton DB, Sadler B. 2017. A methodology for reviewing the quality of strategic environmental assessments in development cooperation. Impact Assess Proj Apprais. 35(3):257–267. doi: 10.1080/14615517.2017.1322811.
  • Da Silva AWL, Netto M, Selig PM, de Ávila Lerípio A. 2019. A framework for governance of sustainability indicator systems in strategic environmental assessment processes. J Environ Assess Policy Manage. 21(1):1950007. doi: 10.1142/S1464333219500078.
  • De Menna F, Dietershagen J, Loubiere M, Vittuari M. 2018. Life cycle costing of food waste: A review of methodological approaches. Waste Manage (Oxford). 73:1–13. doi: 10.1016/j.wasman.2017.12.032.
  • Djordjevic B, Krmac E. 2016. Key performance indicators for measuring the impacts of ITS on transport. ISEP Conference. [accessed 14 Sep 2023]. https://www.researchgate.net/publication/302909689_Key_performance_indicators_for_measuring_the_impacts_of_its_on_transport.
  • Donnelly A, Jones M, O’Mahony T, Byrne B. 2007. Selecting environmental indicators for use in strategic environmental assessment. Environ Impact Assess Rev. 27(2):161–175. doi: 10.1016/j.eiar.2006.10.006.
  • Dwyer N, O’Mahony T, O’Regan B, Moles R. 2014. Developing key performance indicators for the effectiveness of strategic environmental assessment in Ireland. Proceedings of the International Association for Impact Assessment Conference 2014; Viña del Mar, Chile. [accessed 14 Sep 2023]. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.565.6259&rep=rep1&type=pdf.
  • Eales RP, Sheate WR. 2011. Effectiveness of policy level environmental and sustainability assessment: Challenges and lessons from recent practice. J Environ Assess Policy Manage. 13(1):39–65. doi: 10.1142/S146433321100378X.
  • Enshassi AA, El Shorafa F. 2015. Key performance indicators for the maintenance of public hospitals buildings in the Gaza Strip. Facilities. 33(3/4):206–228. doi: 10.1108/F-07-2013-0053.
  • EPA. 2015. Developing and assessing alternatives in strategic environmental assessment. Ireland: EPA; [accessed 14 Sep 2023]. https://www.epa.ie/publications/monitoring–assessment/assessment/strategic-environmental-assessment/developing-and-assessing-alternatives-in-strategic-environmental-assessment-sea.php.
  • EPA. 2020. Second review of strategic environmental assessment effectiveness in Ireland. Report Nº 306. EPA Research Programme 2014–2020. [accessed 14 Sep 2023]. http://www.epa.ie/publications/research/environmental-technologies/Research_Report_306.pdf.
  • EPA (Environmental Protection Agency). 2012. Review of Effectiveness of SEA in Ireland. Ireland: EPA; [accessed 14 Sep 2023]. https://www.epa.ie/publications/monitoring–assessment/assessment/strategic-environmental-assessment/review-of-effectiveness-of-sea-in-ireland—main-report.php.
  • Fischer TB, Gazzola P. 2006. SEA effectiveness criteria—equally valid in all countries? The case of Italy. Environ Impact Assess Rev. 26(4):396–409. doi: 10.1016/j.eiar.2005.11.006.
  • González A, Donnelly A, Jones M, Klostermann J, Groot A, Breil M. 2011. Community of practice approach to developing urban sustainability indicators. J Environ Assess Policy Manage. 13(4):591–617. doi: 10.1142/S1464333211004024.
  • Hanna K, Noble BF. 2015. Using a Delphi study to identify effectiveness criteria for environmental assessment. Impact Assess Proj Apprais. 33(2):116–125. doi: 10.1080/14615517.2014.992672.
  • Holliday M. 2021. How to choose the right KPIs for your business. [accessed 14 Sep 2023]. https://www.netsuite.com.au/portal/au/resource/articles/business-strategy/how-to-choose-kpis.shtml.
  • Huang Y, Fischer TB, Xu H. 2017. The stakeholder analysis for SEA of Chinese foreign direct investment: the case of ‘one belt, one road’ initiative in Pakistan. Impact Assess Proj Apprais. 35(2):158–171. doi: 10.1080/14615517.2016.1251698.
  • Jahangirian M, Taylor SJE, Young T, Robinson S. 2017. Key performance indicators for successful simulation projects. J Oper Res Soc. 68:747–765. doi: 10.1057/jors.2016.1.
  • Jha-Thakur U, Gazzola P, Peel D, Fischer TB, Kidd S. 2009. Effectiveness of strategic environmental assessment – the significance of learning. Impact Assess Proj Apprais. 27(2):133–144. doi: 10.3152/146155109X454302.
  • Li T, Wang H, Dent B, Ren W, Xu H. 2016. Strategic environmental assessment performance factors and their interaction: An empirical study in China. Environ Impact Assess Rev. 59:55–60. doi: 10.1016/j.eiar.2016.03.008.
  • McGinley O, Moran P, Goggins J. 2022. An assessment of the key performance indicators (KPIs) of energy efficient retrofits to existing residential buildings. Energies. 15(1):334. doi: 10.3390/en15010334.
  • O’Regan B, Moles R, O’Mahoney T. 2016. A proposed framework for measuring SEA effectiveness in Ireland using key performance indicators, report. Science, Technology, Research And Innovation For The Environment (STRIVE) Programme 2007–2013.
  • Pichler R. 2015. 10 tips for using key performance indicators. [accessed 14 Sep 2023]. https://www.romanpichler.com/blog/10-tips-for-product-key-performance-indicators-kpis/.
  • Pope J, Bond A, Cameron C, Retief F, Morrison-Saunders A. 2018. Are current effectiveness criteria fit for purpose? Using a controversial strategic assessment as a test case. Environ Impact Assess Rev. 70:34–44. doi: 10.1016/j.eiar.2018.01.004.
  • Rockart J. 1979. Chief executives define their own data needs. Harv Bus Rev. 52(2):81–93.
  • Runhaar H, Driessen PJ. 2007. What makes strategic environmental assessment successful environmental assessment? The role of context in the contribution of SEA to decision-making. Impact Assess Proj Apprais. 25(1):2–14. doi: 10.3152/146155107X190613.
  • Sadler B. 1996. Environmental assessment in a changing world: Evaluating practice to improve performance. Ottawa: Minister of Supply and Services Canada. [accessed 14 Sep 2023]. https://trid.trb.org/view/653262.
  • Say N, Herberg A. 2016. The contribution of public participation to good-quality strategic environmental assessment. Fresenius Environ Bull. 25(12a):5751–5757.
  • Schomaker M. 1997 Jan 25–26. Development of environmental indicators in UNEP. Paper presented at: Land Quality Indicators and Their Use in Sustainable Agriculture and Rural Development; Rome: FAO. p. 35–36. [accessed 14 Sep 2023]. http://www.fao.org/docrep/W4745E/w4745e07.htm.
  • Sukman D. 2016. The institutional level of war. The strategy bridge. [accessed 14 Sep 2023]. https://thestrategybridge.org/the-bridge/2016/5/5/the-institutional-level-of-war.
  • Therivel R, González A. 2019. Introducing SEA effectiveness. Impact Assess Proj Apprais. 37(3–4):181–187. doi: 10.1080/14615517.2019.1601432.
  • Van Buuren A, Nooteboom S. 2009. Evaluating strategic environmental assessment in the Netherlands: content, process and procedure as indissoluble criteria for effectiveness. Impact Assess Proj Apprais. 27(2):145–154. doi: 10.3152/146155109X454311.
  • Wang H, Bai H, Liu J, Xu H. 2012. Measurement indicators and an evaluation approach for assessing strategic environmental assessment effectiveness. Ecol Indic. 23:413–420. doi: 10.1016/j.ecolind.2012.04.021.
  • Zhang J, Christensen P, Kørnøv L. 2013. Review of critical factors for SEA implementation. Environ Impact Assess Rev. 38:88–98. doi: 10.1016/j.eiar.2012.06.004.
  • Zvelo. 2021. The strategic, operational and tactical levels of cyber threat intelligence. [accessed 14 Sep 2023]. https://zvelo.com/strategic-operational-tactical-cyber-threat-intelligence/.