Research Article

Strategic environmental assessment monitoring: the enduring forgotten sibling

Pages 168-176 | Received 30 Jun 2021, Accepted 10 Jan 2022, Published online: 31 Jan 2022

ABSTRACT

Monitoring in Strategic Environmental Assessment (SEA) is central to determining whether the assessment outcomes and associated mitigation measures have had the desired effect and the environment has been protected on the ground. Despite the critical role this procedural stage plays, and the international attempts to advance practice in this area, monitoring continues to be poorly performed 20 years on from the implementation of the European SEA Directive. Several frameworks and approaches can be found in the academic literature, but many are conceptual and arguably fail to provide pragmatic, unsophisticated solutions that are readily implementable in practice. This paper attempts to address this by first outlining common issues reported in the academic literature over the last two decades, then highlighting ongoing issues identified in two recent Irish research projects, such as the poor definition of monitoring indicators and the lack of integration of monitoring commitments into plans/programmes. It then puts forward a set of recommendations intended to foster a practical and sensible approach to kick monitoring into action.

1. Introduction

Strategic Environmental Assessment (SEA) monitoring is a mandatory requirement under European and certain international law (EC 2001; Dalal-Clayton and Sadler 2005). It is recognised as a good practice principle (IAIA 2002) and essential for assessment accountability and learning (Jiricka-Pürrer et al. 2021; Persson and Nilsson 2007). SEA monitoring can be defined as the process undertaken post-assessment and during plan/programme implementation to understand the actual – as opposed to predicted – outcomes of the plan/programme and SEA. It relies on observations to detect, understand and evaluate changes in the physical environment (Azcárate et al. 2013).

Monitoring is considered a key component of SEA ‘follow-up’, defined as the ‘monitoring and evaluation of the impacts of a project or plan (…) for management of, and communication about, the environmental performance of that project or plan’ (Morrison-Saunders and Arts 2004, p. 4). It is the ultimate procedural step, and a cornerstone for both SEA and follow-up, that helps check whether SEA is tangibly effective (Figure 1). The prime function of monitoring in SEA follow-up is to provide data and information on the actual effects of implementing a plan/programme, to help determine whether the SEA predictions were correct and potential significant effects have been mitigated (Morrison-Saunders and Arts 2004; Gachechiladze-Bozhesku and Fischer 2012).

Figure 1. Monitoring as a cornerstone in Strategic Environmental Assessment (SEA) and SEA follow-up.


Monitoring data can be used to check whether SEA recommendations and mitigation measures have been implemented and how effective these are at protecting the environment. Monitoring can help identify unforeseen effects (and support timely remedial action), address assessment uncertainties and fill data gaps identified during the SEA – at planning level but also at project implementation level (Morrison-Saunders and Arts 2004; Gachechiladze-Bozhesku and Fischer 2012; Azcárate et al. 2013). In other words, without monitoring, it is not possible to know, let alone understand, the consequences of SEA. Without monitoring, it is very difficult to satisfy the substantive dimension of SEA effectiveness and establish whether SEA has resulted in environmental protection. Nor is it possible to ascertain whether tiering in environmental assessment has been effective, that is, whether SEA’s role in informing planning decisions and indeed project development and associated Environmental Impact Assessments (EIAs) has been fulfilled (González and Therivel 2021), and therefore whether any causal links can be made from ‘decision’ to ‘implementation’ (Fischer and Retief 2021; Therivel and González 2021). Similarly, without monitoring, it is difficult to learn from experience to continue enhancing assessment processes and outcomes (Jiricka-Pürrer et al. 2021; Partidário and Fischer 2004; Persson and Nilsson 2007; Thérivel and González 2019).

Arguably, there is little point in SEA without monitoring as the effectiveness of the assessment in protecting the environment remains unknown and learning is halted. Yet, monitoring continues to be the forgotten sibling in the environmental assessment family of steps and tools – forgotten yet essential. This paper aims to revisit the importance of monitoring, acknowledging the shortcomings identified in the international literature and in recent practice reviews. The goal of the paper is to update the state of the art, and thus to continue to foster discussions in this area by attempting to fill in certain knowledge gaps. Perhaps more importantly, the ultimate purpose of the paper is to put forward a series of reasonable pragmatic measures to foster action on monitoring that can also serve as the basis for future research on monitoring performance.

2. SEA monitoring in international research

Monitoring in environmental assessment has gained significant international attention in the academic literature, more so for EIA than for SEA. Applying the search string [‘Strategic Environmental Assessment’ OR ‘SEA’] AND [‘monitoring’ OR ‘follow-up’ OR ‘follow up’], with no other search limitations, Scopus returns 195 results, which, when filtered further for their actual linkage to SEA, reduce to 30. A thorough review of these papers identified 22 relevant manuscripts and two milestone books: one specific to environmental assessment follow-up by Morrison-Saunders and Arts (2004), which is mostly focused on EIA but does include chapters on SEA follow-up, and one on SEA by Sadler et al. (2012) that touches upon SEA monitoring. The identified publications date from 2003 to 2021. Interestingly, a particular concentration of attention to SEA monitoring and follow-up is observed between 2007 and 2013, when 68% of the peer-reviewed journal articles were published (Figure 2). Since 2013, SEA monitoring and follow-up seem to have lost research momentum; indeed, the later publications refer to SEA monitoring as part of wider SEA effectiveness reviews and there is a dearth of targeted research on the topic – albeit two recent papers bring some attention back to it (i.e. Gutierrez et al. 2021; Jiricka-Pürrer et al. 2021).

Figure 2. Year of publication of the SEA monitoring and follow-up publications identified in a targeted Scopus search.


Three papers explore specific monitoring case studies (e.g. Azcárate et al. 2013; Gachechiladze-Bozhesku 2012; Jiricka-Pürrer et al. 2021). Others present conceptual frameworks, including:

  • conformance, performance, management and dissemination approach for each planning tier proposed by Partidário and Fischer (2004);

  • indicator-based plan monitoring developed by Mascarenhas et al. (2012);

  • linking planning and programming processes through monitoring by Wallgren et al. (2011);

  • micro- and macro-evaluations by Sadler (2004);

  • multi-track approach with different levels of follow-up suggested by Partidário and Arts (2005); and

  • tool kit including analytical and deliberative tools proposed by Nilsson et al. (2009).

Most of the papers, however, report on SEA monitoring performance itself as portrayed in a range of good practice case studies and SEA effectiveness reviews (e.g. Gachechiladze et al. 2009; Söderman and Kallio 2009; Lundberg et al. 2010; Wallgren et al. 2011; Gachechiladze-Bozhesku and Fischer 2012; Lamorgese et al. 2015; Hadi et al. 2019; Gutierrez et al. 2021). In these publications, there is general agreement that monitoring is weak.

3. Enduring challenges in monitoring practice

The international literature continues to report on monitoring shortcomings such as the absence of allocated monitoring responsibilities (Polido et al. 2016) and the questionable adequacy of monitoring measures (Gutierrez et al. 2021). Monitoring measures in SEA environmental reports are typically not linked to identified impacts, demonstrating ‘a lack of understanding as to the very purpose of monitoring’ (Söderman and Kallio 2009, p. 17). Monitoring generally focuses on observing whether the SEA recommendations and mitigation measures are implemented (Lundberg et al. 2010; Wallgren et al. 2011), and only on very rare occasions does it measure actual impacts on the physical environment (Azcárate et al. 2013). When it does, measurements and observations are commonly based on existing monitoring systems, following from the SEA Directive’s recommendation to use ongoing monitoring arrangements where appropriate, with a view to avoiding duplication of monitoring (EC 2001). Where existing monitoring systems are applied, these do not always cover the indicators and information needs of SEA (Hanusch and Glasson 2008; Gachechiladze et al. 2009).

Monitoring has been advocated as a means to fill in data gaps and address uncertainties, but only on very rare occasions has it been reported to be applied with that goal in mind (e.g. Azcárate et al. 2013; Polido et al. 2016). The large majority of the reviewed manuscripts (86%) highlight the need to nurture and strengthen monitoring and follow-up in SEA.

The above findings echo those of the recent European SEA REFIT programme (EC 2019), which also pointed to monitoring requirements being poorly implemented, including the identification of appropriate monitoring indicators. The REFIT report is unclear on whether Member States undertake monitoring systematically and on the frequency of monitoring. Although some Member States claim that monitoring reports are submitted ‘regularly’ for certain plans, the REFIT report does not state which countries cultivate such good monitoring practice. The findings also suggest a reliance on standard indicators (e.g. guidance-defined and/or associated with ongoing monitoring measures), with case-by-case monitoring indicators more often defined at lower planning tiers.

The REFIT study concludes that ‘there are challenges in the quality of monitoring the environmental effects of the implementation of plans or programmes, especially when it comes to identifying unforeseen effects and undertaking remedial action (…) environmental effects are not adequately monitored (…) and poor monitoring hinders the Directive’s success’ (EC 2019, p. 46). Recommendations to address ongoing challenges include ‘a more explicit link between the SEA requirements of an individual plan or programme and existing monitoring activities, in order to avoid unnecessary duplication of these actions (e.g. by establishing an open national/regional database of environmental monitoring activities)’ and provision of ‘examples of successful [monitoring] implementation approaches, including strategies for ensuring [their] long-term sustainability’ (EC 2019, pp. 47 & 159).

3.1. Monitoring performance in Irish SEA practice

A research project looking at SEA effectiveness in Ireland (EPA 2018; González et al. 2019) included a review of current monitoring practice in Irish SEAs to support the development of monitoring guidance. The review looked at 15 good practice SEA case studies across sectors and planning hierarchies (Table 1), and interviewed 30 national SEA practitioners and 13 international SEA experts to gather information on benefits/limitations of current SEA practice not captured in SEA Environmental Reports (SEA ERs), and to seek expert opinion on how to improve current practice.

Table 1. Case studies reviewed in the second review of SEA effectiveness in Ireland (EPA 2018)

The key procedural challenges identified in this effectiveness review were similar to those experienced in an earlier review (EPA 2012), notably the consideration of alternatives and monitoring. As with SEA practice internationally, monitoring remains the most significant gap in Irish SEA practice. A degree of ‘informal’ monitoring takes place in land-use planning, where the mandatory planning requirement to review development plans every 6 years and to formulate interim reports after 2 years forces planners to take stock of environmental changes – but not in a formal SEA monitoring sense. The generally systemic lack of SEA monitoring hinders a comprehensive evaluation of impact avoidance and sustainable development due to SEA, even in cases where mitigation has been integrated into the final plan. The review unveiled the following ongoing monitoring deficiencies:

  • Monitoring typically focuses on plan/programme implementation (e.g. whether the plan/programme policies and actions have been realised within the planning period), rather than on the environmental impacts and/or changes resulting from plan/programme implementation, as per SEA requirements.

  • Monitoring indicators are often based on assessment objectives; reusing the indicators and targets from the assessment (i.e. baseline or impact assessment criteria) as part of the proposed monitoring programme presents one of the key inadequacies in current SEA practice, as these may not capture key issues or may be too broad to inform monitoring at the local level.

  • Monitoring is regularly used as a form of mitigation (i.e. ‘monitor and manage’). This approach is appropriate for filling data gaps but should only be used as a mitigation measure of last resort as, if applied as such, it would allow impacts to become significant before they are identified.

  • The opportunity to use monitoring to specifically address identified data gaps (and thus help with the assessment of subsequent iterations of the plan/programme, and project-level assessments) is generally missed.

  • Monitoring periodicity, thresholds or remedial actions are commonly missing in the SEA monitoring programme.

  • Monitoring responsibilities lack clarity. Monitoring data tend to come from third-party bodies undertaking systematic monitoring of key indicators (e.g. the Irish Environmental Protection Agency – EPA – monitors water quality nationally). However, it is the plan-making authority that is responsible for collating and synthesising the relevant information and reporting on it in relation to their plan/programme, which is often not captured in SEA ERs.

  • Monitoring data from one round of plan-making do not seem to inform the next round of plan-making; SEA ERs jump straight into the baseline of the current plan, with no reference to what has happened since the adoption of the previous plan/programme.

  • Monitoring reports are not prepared and/or made publicly available. While some monitoring may be taking place, there is a lack of understanding of its occurrence and effectiveness as the findings are rarely made available.

To compound the issues above, consulted experts and practitioners highlighted several significant monitoring challenges, including the fact that national legislation does not specify reporting requirements or assign any third-party authority with oversight/enforcement functions in relation to SEA monitoring. They also noted that responsibility for monitoring and remedial action is particularly difficult to assign where issues cross several agencies’ responsibilities. For example, in the case of water quality, monitoring is undertaken by the EPA, wastewater upgrades are delivered by Irish Water, and the pressure sources may ultimately be a combination of diffuse agricultural pollution, land uses, industrial effluent and wastewater. How this is captured through a development plan/programme’s monitoring measures, and remediated, is not straightforward.

Consultees observed that knowledge exchange on monitoring, or on whether monitoring is taking or has taken place, is affected by personnel changes between planning cycles: institutional memory, for instance on where to find data or indeed on the need to collate and analyse them, can be rapidly lost with changes of planning personnel. A lack of resources to carry out monitoring following SEA/plan adoption, and the absence of guidance on monitoring, were also considered key difficulties hindering practice. In particular, consultees recommended that guidance be developed to:

  1. clarify the actual objective of SEA monitoring (e.g. is it to monitor the plan/programme or the environment, and to what effect?);

  2. provide recommendations on the evidence needed to track changes resulting from a plan/programme; and

  3. showcase approaches that help address the complexity of interactions that make it difficult to determine whether any environmental changes occur as a result of a given plan/programme (due to multiple factors influencing overall environmental quality, and to amorphous links between planning hierarchies and sectors, where a plan/programme will influence plans/programmes/projects downstream but also across parallel sectors).

The guidance should also provide recommendations on the nature and level of monitoring. For instance, where plans/programmes lack geographic specificity or contain only high-level strategic objectives, monitoring should focus on national indicators to examine environmental trends; where plans/programmes contain detailed actions, monitoring should focus on cause-effect models to measure environmental effects more directly.

3.2. SEA and EIA monitoring links in Irish SEA practice

A recent project on the influence of SEA on EIA, or tiering in environmental assessment (EPA 2021; González and Therivel 2021), reviewed a separate set of nine SEAs and 12 associated EIAs (Table 2). Specific review criteria were included to evaluate monitoring links between SEA and EIA (e.g. checking whether the higher-tier SEA refers to monitoring data from previous strategic actions, whether the SEA monitoring refers to individual projects/EIAs, and whether the lower-tier EIA/SEA refers to the higher-tier SEA and its monitoring). The review was supplemented with interviews of 14 national and 14 international SEA practitioners, plan-makers, development control planners and academics. The interviews aimed to identify links between SEA and EIA, good practice, and suggestions for fostering and strengthening these links.

Table 2. Case studies reviewed in the tiering in environmental assessment research project (EPA 2021)

The review findings revealed again that, overall, tiering links between SEA and EIA monitoring measures are weak. Only one of the seven reviewed SEAs (i.e. the Waterford City Development Plan 2013–2019 SEA ER) relates its monitoring programme to specific projects, with the rest keeping monitoring at the strategic level. Some have vague project-level monitoring references, such as the Clare Wind Energy Strategy SEA ER, which recommends that monitoring information should be placed on Geographic Information Systems (GIS) and updated as data become available, for instance from EIA Reports (EIARs). In a number of cases, SEA ERs refer to monitoring as a means to fill in data gaps. For example, the Shannon Strategic Integrated Framework Plan 2013–2020 states that ‘the most significant data gaps which should be prioritised are bird surveys (…) together with cetacean monitoring (…) In order to supplement biodiversity data gaps, additional data gathering to be subsequently used during the plan review or at project level should be undertaken’ (SIFP SEA ER, p. 426). Of the 12 EIARs, only two have EIA monitoring measures that are influenced by monitoring at the higher tier. For example, the Cherrywood project (Mixed-use Town Centre Development 2017) EIAR includes monitoring measures in each chapter, and these are linked to the SEA. These shortcomings further emphasise the poor performance of the monitoring stage in environmental assessment practice.

The interviewed practitioners and experts attested to the importance of monitoring at both SEA and EIA levels and highlighted the role of monitoring in linking different tiers of assessment – that is, in supporting tiering in impact assessment. Strategic monitoring indicators can be brought down to the project level to follow up on the implementation of SEA mitigation measures, fill data gaps and identify unforeseen adverse effects. The monitoring information at EIA level can, in turn, accumulate back up to inform the strategic monitoring indicators. However, there was also an acknowledgement that ‘monitoring programmes are included in the SEA ER and then almost forgotten about (…) Monitoring is the exception rather than the norm’ and that ‘monitoring is often insufficiently detailed or clear to inform EIA’. Similarly, a planner noted that SEA monitoring provisions may not be followed up because of a lack of resources or, as an EIA consultant observed, because they are not ‘stitched into the policy requirements of the relevant plan’.

Two interviewees observed that collecting EIA information on sensitive issues that may have arisen at the project stage (e.g. extracting relevant information from the EIARs) to inform future SEAs would be a resource intensive task. One of them noted that ‘while EIAs contain a wealth of information in terms of dedicated long-term and seasonal surveys, analysis of baseline and historical monitoring information, much of this is not captured or collated in any coherent manner which can be made available for use in SEAs’.

None of the international interviewees gave examples of effective monitoring of either SEA or EIA. SEA monitoring, where carried out, seems to be of the environmental baseline – bringing together data that already exist elsewhere – rather than of the effectiveness of SEA mitigation (or of plan implementation). It was considered that, in Europe at least, this lacuna exists because there is no legal requirement for anyone to check the results of SEA or EIA monitoring, except in limited cases such as licensed facilities (regulated, for example, under Integrated Pollution Prevention and Control licensing). Yet, it was observed that monitoring will become mandatory under ongoing changes in climate, so ‘developing climate change indicators that can be measured at various tiers could give SEA better footing.’

4. Discussion

Beyond the recognition of its legal mandate, the importance and benefits of monitoring have been widely acknowledged (e.g. Azcárate et al. 2013; Gachechiladze-Bozhesku and Fischer 2012; Jiricka-Pürrer et al. 2021; Morrison-Saunders and Arts 2004; Persson and Nilsson 2007). In addition, monitoring is considered the stage that could best link SEA and EIA procedures; tiering through monitoring can enhance both assessment types (González and Therivel 2021). However, the latest peer-reviewed publications, the findings from the European REFIT review and recent Irish research studies all point to the same issue: monitoring remains the forgotten sibling of environmental assessment practice. The findings of the Irish reviews, for example, echo previous studies. The inadequacy of monitoring indicators observed in Irish practice was also observed by Polido et al. (2016). The missed opportunity to use monitoring to fill data gaps, address uncertainty and capture unforeseen adverse effects is a shortcoming also noted by Azcárate et al. (2013) and Partidário and Fischer (2004). The lack of resources to support SEA monitoring implementation remarked upon by Irish practitioners aligns with the findings of a previous online international survey (Gachechiladze-Bozhesku and Fischer 2012) and the more recent SEA REFIT (EC 2019). This particular limitation could potentially be addressed by participative monitoring approaches that rely on citizen science, such as those reviewed by Stepenuck and Green (2015) and Carton and Ache (2017). Such approaches enable distributed data collection over long periods and foster public input and engagement in decision-making by increasing awareness of local issues/concerns to influence decisions (González and Gazzola 2019). Nevertheless, the complexities of determining whether any environmental changes occur as a result of a given plan/programme remain, as previously emphasised in the international literature (e.g. Arts 1998; Partidário and Fischer 2004; Cherp et al. 2012; Fischer and Retief 2021). Similarly, the absence of reporting or communication of SEA monitoring results in practice has been repeatedly noted (e.g. Gachechiladze et al. 2009; Lundberg et al. 2010; González et al. 2019).

The international literature has identified issues additional to those highlighted by the Irish case studies. For example, SEA monitoring that relies on existing environmental observation systems is often inappropriate and/or insufficient due to problems with data collection frequencies, scales and compatibilities (Hanusch and Glasson 2008; Gachechiladze et al. 2009). Previous studies have also highlighted shortcomings in current monitoring practice with regard to the implementation of mitigation measures and the identification/evaluation of unforeseen, emerging and external issues (Hanusch and Glasson 2008; Gachechiladze et al. 2009; Lundberg et al. 2010; Wallgren et al. 2011).

Interestingly, many of the practical inefficiencies encountered in other assessment stages (e.g. baseline environment, alternatives, mitigation) are being slowly but surely addressed. This is partly a result of guidance and ongoing practice, and partly the result of a build-up of legal challenges that have fostered improvements in SEA practice. A review of Court of Justice of the EU (CJEU) case law highlights that SEA and EIA legal challenges to date have mostly focused on the failure to carry out assessments and to comply with legal requirements concerning the environmental report (EU 2020; ECGF 2021). Monitoring does not seem to have ‘hit’ the CJEU yet, which, in itself, is a sign that this procedural stage and its implementation continue to be overlooked and neglected.

5. Recommendations for good monitoring practice

In an attempt to support better monitoring practice, and in order to foster further discussion on this critical environmental assessment sibling, Table 3 puts forward a set of pragmatic recommendations. These have been gleaned from the literature, interviews with planners and SEA experts, and case studies.

Table 3. Recommendations to improve SEA (and EIA) monitoring practice

6. Conclusion

Monitoring is a key part of SEA, but it remains the forgotten sibling to the other SEA steps that are generally improving worldwide. The enduring poor performance of SEA monitoring significantly reduces the ability of the impact assessment research and practice community to determine whether SEA is resulting in sustainable outcomes and preventing adverse effects on the environment – and thus determine the performance of the pluralist, substantive, normative and knowledge/learning dimensions of SEA effectiveness.

This paper highlights the continuing need for stronger measures to effectively implement monitoring. The proposed good practice recommendations have an Irish focus but are relevant and transferable to other SEA and planning contexts, and should support improved SEA monitoring in any jurisdiction. At the heart of this is the need for practitioners to focus efforts on monitoring, and for plan-makers to commit to implementing monitoring programmes, if future plan/programme cycles are to benefit from a proper understanding of assessment outcomes and of environmental pressures and consequences. The establishment of appropriate monitoring systems that respond to monitoring requirements across multiple sectors and planning hierarchies will require investment, but should deliver net benefits in due course by informing assessments that provide for better plans with a reduced risk of unforeseen impacts and need for remedial action.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Environmental Protection Agency (Ireland) [2017-NCMS-8 and 2019-SE-DS-21].

References