Using a Delphi study to identify effectiveness criteria for environmental assessment

Pages 116-125 | Received 11 Aug 2014, Accepted 06 Nov 2014, Published online: 22 Jan 2015

Abstract

Effectiveness is a long-standing issue in impact assessment (IA) practice and research – the theme is fundamental to the continued development and improvement of IA, and is essential to understanding its impacts on and contributions to environmental management. Understanding effectiveness not only requires attention to the variables that shape the context within which an IA system operates, but it also requires gauges for evaluation. In this paper, we present the results of a Delphi study conducted to develop criteria that can be used in appraisal or audits of IA effectiveness. Involving 55 IA experts, the Delphi resulted in 49 criteria, organized into nine themes. Although the framework of criteria was developed for analysis of the Canadian assessment setting, the criteria can have value to other jurisdictions. The Delphi wove sustainability throughout the criteria rather than set it as a single criterion. While context remains a key theme in evaluation, the Delphi demonstrates that there may be universal qualities that IA should possess if it is to be an effective environmental management tool, and these may transcend context.

Introduction

As a concept, impact assessment (IA) is quite simple – it is a pragmatic tool intended to identify, assess and find ways to mitigate or eliminate the potential negative impacts of proposed projects, policies or plans on the human and biophysical environment. IA has a long history. It is well established, widely practiced and now arguably a key instrument for environmental management (Morgan 2012). Indeed, if longevity and extensive application count, IA may be one of the most successful environmental policy developments of the twentieth century, being utilized in more than 100 countries in its first three decades (Sadler 1996); as of 2013, some form of IA is mandated in 191 countries (Pope et al. 2013). IA originated in the United States, by way of the 1970 National Environmental Policy Act, and as the practice expanded globally, perspectives on what constitutes effective environmental IA became increasingly varied (Sadler 1996). The diversity of opinion on effective IA is not limited to theory, practice or politics; it also reflects the diverse geographies that pose unique challenges for IA around the world and the emergence of specializations within the practice (Pope et al. 2013). The International Association for Impact Assessment has also worked to advance the field through the development and dissemination of best practices, including methodologies and procedures that may be broadly applied to improve practice and effectiveness (IAIA 2014).

There is no lack of examples where IA has helped shape important development decisions. Then again, there are also examples where it has failed to alleviate egregious environmental impacts, although some would argue that such failures are more often the result of decision-makers failing to heed the advice and information provided by an IA process. Nevertheless, a key accomplishment of IA is that it has helped ensure that, at the very least, environmental impacts are considered in the development process; the impact of that consideration, however, has certainly been variable. It is not surprising, then, that effectiveness is a long-standing issue in IA research and practice (Chanchitpricha & Bond 2013) – the theme is fundamental to the continued development and improvement of IA and essential to understanding its impacts on and contributions to environmental management. Despite the recognition of the importance of effectiveness, the emergence of a well-developed discourse on the subject within the research and practice communities, and a growing number of systematic evaluations of the actual impacts and influence of IA on environmental quality, management and policy, the criteria and conceptualization of effectiveness remain important and sometimes contested realms within IA research and practice.

In addition to the challenge of defining effectiveness, there is the ongoing tension, and often confusion, between effectiveness and efficiency. Agencies and scholars have examined the influence of individual IA components, such as public participation (O'Faircheallaigh 2010; Glucker et al. 2013), monitoring (e.g. Morrison-Saunders & Arts 2004) or social learning (e.g. Sinclair et al. 2008). Some have looked at how ‘happy’ stakeholders, notably agencies and development proponents, are with IA (e.g. Harrison 2006; McCrank 2008); these studies, often concerned with efficiency, have tended to focus on key efficiency goals such as streamlining assessment systems, making them easier for those whose undertakings are subject to IA and providing greater certainty with respect to project approval (Morgan 2012; Bond et al. 2014). However, other researchers (e.g. Morrison-Saunders 1996; Morrison-Saunders & Bailey 1999; Morrison-Saunders & Arts 2004) have specifically considered the actual outcomes of IA, its effectiveness as an environmental management and protection tool, and the relationship between assessment and broad environmental management objectives.

In our research, we have sought to better understand the impact of Canadian IA processes (both federal and provincial systems) on environmental management and protection, and provide research that examines the factors that help shape its effectiveness in this regard. While our work continues, one aim is to develop a set of criteria that can be used in the evaluation and audit of IA effectiveness in different settings. To identify criteria, we consulted with a diverse group of experts drawn from public and private sector practice and academic research in Canada and in the European Union (EU) using a Delphi study design. This paper presents these results. In the sections that follow, we first present a brief review of effectiveness in IA within the context of our own research. This is followed by a description of our Delphi approach to defining effectiveness criteria and the criteria resulting from the Delphi exercise. A discussion of the results comes after, and we conclude with implications and further research needs in relation to IA effectiveness.

Effectiveness

The Canadian IA context has undergone significant change in recent years, especially at the federal level. The changes made to Canada's federal environmental assessment process have been outright decried by some (e.g. Doelle 2012; Gibson 2012) and more optimistically considered by others (e.g. Barnes & Hegmann 2013; Horvath & Barnes 2013). Improving the effectiveness of assessment was a key theme in Canada's recent federal assessment reforms, although the reforms appear to be more tailored to efficiency goals – reduce review time, narrow the range of projects and activities assessed, and create more certainty with respect to decision outcomes for project proponents. Efficiency and effectiveness do seem to be used interchangeably in the context of recent Canadian IA reform and are often coupled with the mantra of achieving greater speed and certainty. Efficiency and effectiveness in IA are not the same thing, although we do suggest that they can coexist.

The basic assumption, or perhaps the hope, about IA is that it will result in more environmentally sound development and planning decisions and better environmental management and protection than would be the case if it did not exist. But we cannot really be sure whether it actually achieves such goals. In a 2005 newsletter of the International Association for Impact Assessment, the then-president Richard Fuggle described a ‘disillusionment’ about IA ‘… and skepticism that impact assessments are contributing to better decisions’. At issue in this sentiment, which is by no means unique, is whether IA is achieving its goal. In other words, is IA really an effective tool for environmental management and protection? The relationship between IA and environmental management has indeed been examined in the scholarly literature (e.g. Bailey & Saunders 1988; Bailey 1997), but given the emerging and enduring concerns about the effectiveness of IA, the costs and time associated with assessment, and pressure to expedite development, Boyden (2007, p. 3) challenged the international IA community ‘… to better identify the benefits as well as costs of impact assessment’. It is fair to say that this concern has hardly diminished; it is now acknowledged more broadly, and considerable thought has been given to how effectiveness can or should be defined, the importance of context, and the evaluation and measurement of the effectiveness of IA processes and outcomes.

A fundamental ideal of IA is that it should be preventative and, as such, it is well positioned to be a more effective instrument for protecting the environment from the impacts of development, or at least mitigating them, than many concurrent or reactive environmental management tools (Hollick 1981; Gibson 1993; Wood 1993; Senécal et al. 1999; Snell & Cowell 2006). As part of advancing this ideal, general guidelines have been put forward for the development of IA practice (Snell & Cowell 2006; Duinker & Greig 2007; Fitzpatrick & Sinclair 2009; Retief et al. 2011; Canadian Environmental Assessment Agency 2013), its theory and purpose have been examined (Cashmore 2004; Chanchitpricha & Bond 2013), and strategies for auditing IA performance have been advanced (Wilson 1998). IA has also been assessed with regard to the types of regulations that affect it (Sandham et al. 2013), as well as the commonly cited critical factors for implementation (Zhang et al. 2013). Concern with identifying the potential consequences of current or proposed actions is also evident in international discussions about appropriate indicators to measure, analyze and assess the potential impact of IA. Discussions of this nature are becoming increasingly common as concepts of sustainability and development impacts are evaluated from broader, ecosystem-based perspectives (e.g. Moldan et al. 2012; Morrison-Saunders & Retief 2012; White & Noble 2013). As assessments often require public participation, discussions about facilitating meaningful participation and critical education have also gained traction as a means to broach IA effectiveness (Sinclair et al. 2008; O'Faircheallaigh 2010; Glucker et al. 2013). Baker and McLelland (2003), for example, evaluated effectiveness from the perspective of aboriginal involvement, seeing this as a key characteristic of effective and successful IA within a complex land-use and land-claims setting. Other bodies of knowledge, outside the IA field, also offer theories and methods relevant to the evaluation of IA. For instance, Van Doren et al. (2013) draw on planning theory to consider the substantive effectiveness of strategic assessment and develop an approach to evaluation. These conversations about effectiveness often relate to the validity and refinement of the approach to, and practice of, IA.

Sadler (1996, p. ii) suggests that the real test of effectiveness may be whether IA has demonstrably affected decision-making and supported environmental management objectives within the institutional framework in which it operates. Rather than setting out specific requirements, Sadler proposed appropriate timing, clear and specific direction, quality information and practice, and receptivity of decision-makers as the four key components that result in effective IA application (Sadler 1996). Bond et al. (2013) proposed an alternative framework for IA effectiveness assessment, which reinforces the value of Sadler's (1996) procedural, substantive and transactive qualities, and also sees normative objectives, knowledge and learning, and pluralism as key components. The latter approach builds on the former while facilitating an understanding, and incorporation, of the diverse opinions or approaches involved in IA, and it builds in a component for reflexivity.

Recent reviews suggest that the effectiveness of IA is best analyzed at the state level, evaluating its validity and impact within the legal framework of the country in which it operates (e.g. Appiah-Opoku 2001; Pölönen 2006; Heinma & Poder 2010; Toro et al. 2010; Che et al. 2011; Pölönen et al. 2011). However, such specificity means that a method is widely used without broadly applicable measures of effectiveness, and without evaluating the success of IA not only as a concept and a realm of practice but also as an environmental management tool; this may be one of the reasons effectiveness discussions continue to be prominent in the field. The introduction of IA as a process that influences decision-making can significantly affect proposed and approved activities. This influence may take the form of the rejection of a proposed activity, the identification of steps for mitigation and the discouragement of the submission of environmentally unsound proposals (Pope et al. 2013). Some suggest, however, that in practice and despite expectations, IA has not been able to minimize impacts, avoid irreversible impacts and facilitate sustainable development (Cashmore et al. 2004). Other criticisms point out that the benefits of IA are declining due to pressures, or reforms, such as the streamlining of review processes (Morgan 2012; Bond et al. 2014).

In light of less than desirable outcomes in some contexts, suggestions for assessment reform have been proposed that adopt a more determinative role for IA in decision-making (Jay et al. 2007). The impacts of context, especially professional and organizational cultures (Emmelin 1998a, 1998b), and the significance of the politics and politicization of IA are increasingly recognized, and some efforts are beginning to outline processes whereby these contextual issues can be integrated in order to advance effectiveness evaluations (Cashmore et al. 2010). It is not that effectiveness studies of IA have not been done. Rather, such studies have often focused on procedural effectiveness, that is, the nature and efficiency of IA frameworks (Harrison 2006), the extent to which IA applications integrate aspects such as cumulative effects (Ball et al. 2013) or human health (Bhatia & Wernham 2008), how the significance of environmental effects is determined (Lawrence 2007), or the quality of the environmental impact statement itself (Tzoumis 2007). In those instances where attention has been directed toward functional aspects of IA, the focus has been on particular IA dynamics, such as social learning (Diduck et al. 2013), the consideration of alternatives (Steinemann 2001), or follow-up and monitoring (Noble & Storey 2005). Effectiveness research needs to bring these themes together, but it needs to do so in a manner that is pragmatic and usable for evaluation and policy audiences, and that continues to support what research has noted are the valuable contributions that IA makes to environmental management and protection (e.g. Bailey & Saunders 1988).

Using a Delphi approach to define effectiveness criteria

The Delphi method is a structured, iterative consultation and survey process that typically includes two or more stages or rounds. It employs a systematic and interactive approach to asking a small group of experts in a specific field to forecast an outcome and/or estimate unknowns, for the purpose of generating opinions to help reach decisions (Hung et al. 2008). Although the Delphi is now commonly used to produce or create consensus (e.g. Graham et al. 2003), that was not its original aim (Linstone & Turoff 2011).

The Delphi has been used in research and practice as a tool to effectively engage diverse stakeholders (Geist 2010). Although originally developed and utilized by the RAND Corporation in the 1950s for the purposes of forecasting and prediction, its application has expanded greatly and it is now used in many disciplines (Gupta & Clarke 1996). The Delphi method has been applied to varied and complex questions, ranging from understanding risk (Webler et al. 1991) to forecasting financial markets (Kauko & Palmroos 2014) and developing social support policies (Yap et al. 2014).

The structured approach of the Delphi is generally considered more accurate for forecasting and consensus building than unstructured alternatives (Rowe & Wright 1999), although this has been contested (Woudenberg 1991). Among the criticisms of the Delphi method are that the resulting expert consensus is the product of conformity and group pressure (Woudenberg 1991) or of desirability bias (Ecken et al. 2011), and that participant selection and attrition over the Delphi rounds may result in a nonrepresentative consensus (Bardecki 1984). Anonymity and controlled feedback were advocated by early adopters of the method as ways to reduce the potential impacts of group pressure, and careful consideration of opinions by the researchers can ensure that views that differ from the majority are accounted for in the analysis and results (Pill 1971). Although techniques such as shared written rationales can inform others of the reasons for taking a certain position, majority opinion may still influence the minority (Bolger et al. 2011). Defining whom to include as an expert in a Delphi is thus an important consideration (Okoli & Pawlowski 2004). The Delphi method can be an effective research approach and can be adapted to diverse contexts (Landeta 2006; Banuls & Turoff 2011; Hasson & Keeney 2011; von der Gracht 2012), such as IA, where the field of practice is broad and draws from multiple disciplines and sectors.

Our Delphi application sought the advice of a group of IA experts to help develop criteria for evaluating the effectiveness of an IA system. The Delphi consisted of three stages and was based on one concise question: What criteria should be used to evaluate the effectiveness of an IA system? In the Delphi, we defined the effectiveness of IA as the extent to which it identifies, assesses, and finds ways to mitigate or eliminate the potential negative impacts of development, and importantly how well it helps protect or improve environmental management and ultimately the state of the environment. The question was open-ended and required text or qualitative responses whereby participants were asked to propose, and define, effectiveness criteria. We used the term ‘impact assessment’ to capture the range of assessment processes, including environmental, strategic and social, and adopted a broad definition of environment including both the biophysical and human realms (social, cultural and economic).

The expert group was identified by selecting key researchers in the field, key government and public agency personnel, and a list of EU national experts at the time. We chose to include a group from the EU because the diversity of systems and their multiple interactions have relevance to the Canadian experience, and because connections between IA research and practice are well developed in the EU. Although participants were situated in different jurisdictional contexts, the make-up of the group helped overcome the potential language and culture challenges that researchers have noted can affect the performance of a Delphi study (e.g. Hung et al. 2008).

The Delphi group embodied a key characteristic of IA practice – it is an interdisciplinary or multidisciplinary field that draws knowledge broadly from the sciences and social sciences, and its practitioners span the public and private sectors and academe. We invited 214 people to participate (100 from Canada and 114 from the EU). Fifty-five accepted and participated in the first round of the Delphi, and at the end of the third round there were 44 Delphi members; we consider this level of attrition modest. Members' names were not shared within the Delphi group, and we assured anonymity when presenting the results. Participants were split relatively equally among government (national, regional and provincial), academic, and private sector and international or transnational organizations (e.g. the European Commission and the United Nations): about 38% were from government and international organizations, 28% from academe and 23% from the private sector. There were no conservation or environmental organization participants. This reflects the nature of the original invitation list, which was mostly government, academic and private sector practitioners, and a reality we encountered: nongovernmental organizations tended to interact with an assessment process only on a project or issue basis, and few had an ongoing and broad interest in the assessment process. The Delphi is also a method that depends to a great extent on those willing to participate and devote time to the process. Some respondents reported that they also worked for such organizations, but not on a full-time basis, and some academic and private sector respondents noted that their work crossed sector boundaries (e.g. a consultant who also works as a sessional lecturer, or an academic doing contract work for a consulting firm).
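For transparency, the simple arithmetic behind the participation figures reported above can be reproduced as follows (an illustrative sketch only; the figures are those given in this section):

```python
# Illustrative arithmetic only; all figures are taken from the text above.
invited = 214                 # 100 from Canada, 114 from the EU
round_one = 55                # accepted and completed the first round
round_three = 44              # members remaining after the third round

response_rate = round_one / invited                # about 26%
attrition = (round_one - round_three) / round_one  # 20% over three rounds

print(f"Response rate: {response_rate:.0%}")  # -> 26%
print(f"Attrition:     {attrition:.0%}")      # -> 20%
```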

A pretest of both the stage 1 Delphi question and the working definition of effectiveness was also conducted. The pretest readers suggested that IA be approached as a linked system and thus the Delphi should be open to comments relevant to system-level qualities (process/procedural aspects) and their links to decisions (substantive outcomes). The pretest readers also suggested that comments on project or plan level of application be encouraged, and that environmental management be conceptualized broadly and include protection and improvement. We did not explicitly include process efficiency, but aspects of this quality, or objective, did emerge throughout the Delphi and are evident in some of the resulting criteria.

Responses were collected using an online survey design tool accessible only to the participants. The criteria proposed by the group were qualitatively analyzed by four researchers at the end of each stage to identify both unique and common themes. The resulting list was then circulated back to the respondents for refinement and change. Further analysis and modification followed until the final list was developed. Minor edits and refinements of wording were made to the final list based on a review of comments from the Delphi group across the stages. The process was a modified Delphi – it sought relative consensus, meaning that some of the criteria reflect majority rather than unanimous agreement. The process took 18 months to complete.
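To make the round structure concrete, the following minimal sketch models the collect–consolidate–circulate loop described above. It is an illustration under stated assumptions, not the study instrument: responses were actually gathered with an online survey tool and coded manually by four researchers, and the `consolidate` stand-in here simply keeps the most frequently proposed items.

```python
from collections import Counter

def consolidate(responses, max_themes):
    """Stand-in for the manual qualitative coding performed at the end of
    each stage; here we simply keep the most frequently proposed
    (normalized) items."""
    counts = Counter(r.strip().lower() for r in responses if r.strip())
    return [theme for theme, _ in counts.most_common(max_themes)]

def run_delphi(collect, participants, targets=(25, 16, 9)):
    """Modified Delphi driver: each stage collects free-text proposals
    against the current theme list and consolidates them, and the result is
    circulated back for the next stage. `collect(participant, themes)` is
    any callable returning a list of proposed criteria."""
    themes = []
    for target in targets:                             # three stages here
        responses = []
        for p in participants:
            responses.extend(collect(p, themes))       # online survey step
        themes = consolidate(responses, max_themes=target)  # 25 -> 16 -> 9
    return themes

# Toy usage: every participant proposes the same two criteria, so the final
# list collapses to those two themes.
final = run_delphi(lambda p, themes: ["assesses cumulative effects",
                                      "mandatory follow-up and monitoring"],
                   participants=["p1", "p2", "p3"])
print(final)
```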

Criteria for effective IA

At the end of stage 1, there were 25 broad criteria identified by our Delphi participants; by the end of stage 2, these had been refined to 16 themes; and after stage 3 we were able to derive a final list of nine criteria themes. Some criteria were specific, and some had multiple components. In stages 2 and 3, there were instances of individual change, where a participant would propose a criterion in an earlier stage and then ask that it be removed later, or reject the refinement or rewording. This supports the value of the Delphi as a place where ideas can be not only identified and refined, but also changed or even rejected by those who initially proposed them, as participants are exposed to other ideas, participate in argument or see the notion juxtaposed with the contributions of others. It is a reflexive process capable of serving both knowledge-building and collaborative learning functions for a community of practice.

The final criteria are grouped under nine criteria themes – a framework for evaluation of effective IA. These are outlined below. In the last stage, we asked participants to identify the four themes they thought were the most important; we did not ask them to rank these. The four most commonly identified effectiveness themes, by number of votes, were stakeholder confidence, integrative and linked to decision-making, promotes long-term substantive gains in environmental management and protection, and comprehensiveness. Some participants noted in their comments that while they valued ‘strong’ participatory qualities, they believed that these should also be considered a natural asset for building confidence in the IA process or system, or that participatory qualities were an aspect of one or more of the other criteria they chose.
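The ‘most important themes’ exercise amounts to a simple approval-style tally. A minimal sketch (the ballots shown are invented for illustration and are not the study data):

```python
from collections import Counter

# Each ballot lists the four themes one participant judged most important.
# These ballots are invented placeholders, not the study's actual responses.
ballots = [
    ["stakeholder confidence", "comprehensiveness",
     "integrative and linked to decision-making", "substantive gains"],
    ["stakeholder confidence", "substantive gains",
     "participation", "evidence-based"],
    ["integrative and linked to decision-making", "stakeholder confidence",
     "comprehensiveness", "accountability"],
]

# Count one vote per theme per ballot and report the four most-voted themes.
tally = Counter(theme for ballot in ballots for theme in ballot)
print([theme for theme, votes in tally.most_common(4)])
```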

1. Stakeholder confidence

  (a) The IA process is known by stakeholders to be objective, and there is confidence that other processes do not predetermine the IA decision.

  (b) The process is understood by stakeholders, and information about the process, proceedings and its authority is accessible and clear.

  (c) The intent of the process is acknowledged and clearly stated, whether it is to advise, to decide, or only to identify baseline conditions and determine impacts.

  (d) There is confidence that major projects or powerful proponents cannot circumvent the process (see also 4(b),(c)).

2. Integrative and linked to approval decision-making

  (a) The results of the IA process are clearly accounted for in the decision (the eventual approval, rejection or approval with conditions).

  (b) The process demonstrably informs, and the results are integrated into, other subsequent or coincident environmental approval and review processes.

  (c) There is capacity to integrate the knowledge and results of other processes into the IA process without unduly influencing its outcomes (see also 1(a)).

  (d) An initiative may not proceed through other approval processes or receive other approvals until the IA process is complete and the initiative approved.

  (e) The process considers impacts beyond the immediate time scale of the policy, plan or program, when applying strategic assessment.

3. Promotes betterment and longer-term and substantive gains to environmental management and protection

  (a) The IA process and its outcomes minimize or eliminate adverse environmental effects that may result from the initiative.

  (b) The process seeks betterment of the environment, when possible, by ensuring net benefits to the environment.

  (c) The process seeks to identify and sustain social and biophysical systems that interact and may be affected by assessment-subject activities.

  (d) The process prevents imposition of significant adverse effects onto future generations.

  (e) There is mandatory follow-up and monitoring, including a supporting audit and public reporting system to ensure compliance with approval conditions.

  (f) The process provides follow-up provisions to assess the efficacy of mitigation requirements and reports on environmental benefits (e.g. provision of compliance schedules, mitigation reports and post-implementation audits, evaluation of immediate and longer-term gains to environmental management and protection).

4. Comprehensiveness

  (a) The definition of ‘environment’ and ‘environmental effects’ encompasses social/cultural and ecological/biophysical factors and their interrelationships at multiple scales.

  (b) IA is applied to the range of initiatives/activities that significantly affect the environment, whether the proponent is from the public or private sector (see also 1(d)).

  (c) Initiatives may be ‘screened out’ (exempted from IA) if there is sufficient information to determine that impacts are insignificant, or otherwise addressed by an alternative process, but listed exemptions are limited to emergency or similar initiatives (e.g. urgent flood control works).

  (d) There is a mandatory scoping stage that occurs early in the assessment process to focus the assessment on key issues and identify opportunities for environmental protection and improvement, and there is opportunity to deal with new information or issues identified throughout the assessment process or during project implementation (see also 3(e),(f) and 9(e)).

  (e) The process requires identification and reasonable consideration of alternatives, including ‘alternatives to’ the initiative and ‘alternative means’ of carrying out or implementing the initiative.

  (f) The process assesses cumulative effects.

5. Evidence-based

  (a) The decisions that follow the IA process clearly and directly reflect the evidence presented in the assessment and/or review proceedings, and the process is open to hearing and considering all relevant and opposing evidence.

  (b) Uncertainties and assumptions about data, system behaviors and future conditions are disclosed and acknowledged in the decision.

  (c) Impact predictions are formulated in such a way that they can be tested or used for follow-up.

  (d) The process requires monitoring and follow-up, and the data and reporting from those activities are made publicly accessible and retained for use in subsequent assessments and decision-making processes.

6. Accountability

  (a) There is a requirement for regular, independent public review of the assessment system, its performance and effectiveness (e.g. a five-year review of process, legislation and regulations).

  (b) Documentation and information disclosure requirements are binding on the process and its administrators, proponents and all other stakeholders.

  (c) There is open and easy access to timely, accurate, full and complete information early and throughout the assessment process, through formats that provide extensive access and acknowledge different forms of access need (multiple formats are used: electronic, print, languages, verbal and other).

  (d) The process is independent and, where needed, multidisciplinary organizations exist to hear requests for exemptions and inclusions, conduct hearings (when they are required) and review assessment documents and reports.

  (e) Roles and responsibilities in the assessment, review and decision-making processes are clearly identified.

  (f) Roles and responsibilities for post-IA activities, including implementation of the initiative and follow-up on mitigation, monitoring and reporting, are clearly identified.

7. Participation

  (a) There is a requirement for stakeholder participation throughout the process.

  (b) Participation opportunities are made well known, while recognizing that such engagement will vary in scale and method according to the nature and scale of the initiative being assessed, the stage of the process and the social–cultural context.

  (c) Sufficient resources and time are provided to support participation.

  (d) The participation approach is designed to improve the quality of the proposal, affect the assessment and influence the decision.

  (e) There is a requirement to broadly consider, use and respect multiple forms of knowledge where applicable and available (e.g. scientific, applied-technical, aboriginal, local and culture-specific).

  (f) Hearings and other similar deliberations are open to the public, and there are no unjustified limitations on open deliberation and presentation of evidence (whether through the imposition of place, time of day, time allowed, insufficient resources, cultural or social barriers, or other unwarranted limitations).

  (g) There is a requirement to publicly report on stakeholder engagement, including how it was undertaken, what was said and how it was accounted for in the assessment and decision.

  (h) There is a requirement to explain how participation was accounted for in the decision.

8. A legal foundation for IA

  (a) IA must be codified in law.

  (b) The legal foundation for IA must provide clarity for stakeholders with respect to applicability, assessment requirements, disclosure requirements, process components, reporting and decision-making.

  (c) The process contains a legal base for participation and accountability requirements.

  (d) The process must outline provisions for enforcement and for addressing noncompliance with assessment requirements or subsequent decisions.

  (e) The IA system must provide decisions (for approvals, conditions, rejections, exemptions and inclusions) that may be appealed by stakeholders or other affected parties based on questions of process veracity or interpretation of law.

9. Capacity and innovation

  (a) The IA process must be administered by competent and impartial authorities with sufficient staffing, skills and qualifications to administer the process, and to review and evaluate technical, social and scientific data.

  (b) The process must provide sufficient financial resources to review agencies to ensure the integrity of, effectiveness of, and confidence in, the process.

  (c) Mechanisms exist in the process for the early consideration of assessment-subject initiatives and the provision of advice to proponents.

  (d) Information accessibility and participation are enhanced by the use of innovative technologies and formats for communication, stakeholder capacity building and information access.

  (e) The process and the supporting institutional framework are flexible, adaptive, and open to new and innovative tools and approaches to assessment and evaluation.
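To suggest how the framework above might be operationalized for evaluation or audit, the sketch below represents themes and (abridged) criteria as a simple data structure and scores a theme on a three-point scale. The scale and the averaging rule are our assumptions for illustration; they were not prescribed by the Delphi group.

```python
# Illustrative only: theme and criterion texts are abridged from the
# framework above; the three-point rating scale and the averaging rule
# are assumptions made for this sketch.
FRAMEWORK = {
    "Stakeholder confidence": [
        "Process known by stakeholders to be objective",
        "Process, proceedings and authority accessible and clear",
        "Intent of the process acknowledged and clearly stated",
        "Major projects or powerful proponents cannot circumvent the process",
    ],
    "Comprehensiveness": [
        "'Environment' spans social/cultural and ecological/biophysical factors",
        "Cumulative effects are assessed",
    ],
    # ... remaining seven themes omitted here for brevity
}

RATINGS = {"met": 1.0, "partial": 0.5, "not met": 0.0}

def theme_score(ratings):
    """Average the criterion ratings for one theme (0 = not met, 1 = met)."""
    return sum(RATINGS[r] for r in ratings.values()) / len(ratings)

# Example: a hypothetical audit of one theme of an IA system.
audit = {
    "Process known by stakeholders to be objective": "partial",
    "Process, proceedings and authority accessible and clear": "met",
    "Intent of the process acknowledged and clearly stated": "met",
    "Major projects or powerful proponents cannot circumvent the process": "not met",
}
print(f"Stakeholder confidence: {theme_score(audit):.2f}")  # -> 0.62
```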

Discussion

After the criteria were derived, we were interested in knowing the extent to which the criteria resulting from our process would differ from or reflect effectiveness themes already present in the IA literature. With some notable exceptions (e.g. Sadler 1996), effectiveness criteria have commonly been based on opinion, albeit well-informed, reflective and experiential opinion. Using the Delphi to generate such criteria, however, produces a relative consensus that is the outcome of the practice and research experience of group members, tiered and interactive consultation, and deliberative argument.

Some of the criteria that emerged from our study do overlap with parts of existing scholarly lists (e.g. Gibson 1993; Sadler 1996; Senécal et al. 1999; Wood 2003); indeed, some participants noted that they were reflecting on ideas from such sources, or from their own writings. The reiteration of some criteria reflects the strength and established nature of key ideas and tenets of assessment practice. But in our results, several differences do emerge. The ranking of criteria themes indicated a preference for four IA qualities: stakeholder confidence, integrative and linked to decision-making, promotes long-term substantive gains in environmental management and protection, and comprehensiveness – this provides some indication of priorities in conceptualizing effectiveness, at least for this Delphi group.

The Delphi group identified participation as a criterion for effective IA, and it is a prevalent theme in other criteria lists (e.g. Gibson 1993; Sadler 1996), but the majority of participants chose not to include it among the four most important effectiveness themes. Perhaps this is because the expectation of a participatory process is relatively well advanced in democratic developed countries, which helps in the evaluation of Canadian IA; there are, however, many nations without substantive public participation mechanisms built into their IA systems. We support the contention that an effective IA process includes public participation mechanisms, and at a basic level these should meet the criteria noted here. The design and implementation of such tools need not mirror those prevalent in western settings, but should respond to specific social–cultural settings while providing the best opportunities for hearing from those impacted by development.

The role of stakeholder confidence is perhaps a unique theme in this study, although hints of this criterion are observable in other rubrics – for example, Wood's (2003) criterion ‘where the benefits of assessment are seen to outweigh the financial and time costs’. Regardless of the extent to which the themes and criteria outlined here reflect recognized criteria, the Delphi provided a setting within which ideas were debated and modified, or even removed, by colleagues through stages of discussion. The resulting Delphi criteria presented here are also aimed at evaluation and audit application, which gives them perhaps a more detailed quality than what may be present in other, more broadly worded criteria or principles.

Context

Two specific issues emerged from our Delphi that are important to understanding and advancing effectiveness criteria for IA evaluation. The first concerns the role and importance of context, which in recent years has emerged alongside effectiveness as an important idea in evaluating IA systems and outcomes; it is not surprising that it arose as a theme in the Delphi. The general argument about context is that what is effective in one context, one regulatory system or one resource sector may not be considered so in another. Furthermore, understandings and interpretations of effectiveness may vary from the proponent to the regulator to the general public (Hilding-Rydevik & Bjarnadóttir 2007). In this study, context emerged as a quality that certainly shapes effectiveness. However, while context is important for understanding the reasons that IA may or may not improve environmental management, advance sustainability or be better connected to decision-making, it is not an excuse for a less than effective IA system – nor does it preclude the evaluation or audit of IA processes for effectiveness using generally accepted evaluation criteria.

We agree that gauging effectiveness is possible and essential, and that such evaluation must also acknowledge contextual variables such as institutional and organizational cultures, the values and doctrines of professions, and political–jurisdictional qualities; these were qualities relevant to the participants in this study as well. However, regardless of context, we propose that there are universal qualities that IA must possess if it is to be effective as an environmental management tool, and the criteria we provide here are a starting point for articulating and evaluating those qualities. For some Delphi participants, the process prompted reflection on the qualities of the IA systems in their jurisdictions. One Delphi member commented that, after reviewing the criteria, they ‘could only conclude that their IA process was ineffective’; another noted that if they ‘applied the resulting criteria to their system it would fail the effectiveness test’. It was evident in the responses that there was a clear struggle to articulate criteria that could be broadly applied rather than a set that was contextually certain.

Sustainability

A second issue concerns how sustainability, or contributions to sustainability, can be addressed in IA effectiveness criteria. We did not ask participants to comment on sustainability, but it emerged as a theme in stage 1. It is an understatement to say that sustainability is an important term in environmental management. Nonetheless, despite the wide adoption of sustainability as a semantic ingredient in the objectives, mission statements and mandates of a range of organizations that deal with IA, it is also seen as lacking clear meaning, possessing conflicting qualities and being difficult to put into practical use (White & Noble 2013). However, Cashmore et al. (2004) propose that the contribution of IA to sustainable development, while not known with certainty, is perhaps greater than typically assumed. The contributions are indirect, but through stakeholder involvement and the subsidiary impacts of IA on institutional actors (government, industry and science), the process can shape development outcomes by influencing causal pathways to decision-making. This impact is evidently more coincidental than deliberate (Cashmore et al. 2004).

Some in the Delphi group viewed sustainability as an important word that should be acknowledged, while others saw it as a difficult notion needing substance, an understanding of institutional settings, clear pathways of influence and discernible operational characteristics. Other Delphi members emphasized the importance of defining sustainability in more operational terms, and some wanted it dropped altogether rather than repeating the ‘commonly used definitions’ and ‘prosaicisms’ associated with the concept, as two members noted in their comments. At the end of Delphi stage 1, there was a stand-alone sustainability criterion. Sustainability has been linked to IA in no small part because sustainability may be an idea in search of a practical means of implementation, and IA is seen as offering that attribute. There is also an emerging sustainability assessment field that has sought to formalize the concept within IA practice and conceptual frameworks (Bond et al. 2012; Morrison-Saunders & Retief 2012). However, the functional qualities of IA cannot compensate for the ethereal nature of sustainability. Because of imprecise and sometimes contradictory interpretations of its meaning, in subsequent Delphi stages the group moved away from a single sustainability criterion and instead integrated what may be its most tangible quality – a holistic and necessarily linked perspective on environment and institutional arrangements – into several of the proposed evaluation criteria: integrative, comprehensiveness and promoting betterment.

The Delphi facilitated the adaptation of arguments to incorporate sustainability across several criteria rather than either discarding it or treating it as a stand-alone measure. The integrative and linked nature of sustainability and its broad qualities may be better served by incorporating the palpable objectives of sustainability into multiple criteria. Regardless of the challenge of accommodating a theme that can be equivocal, the Delphi was able to find a balance: it articulated qualities that are pragmatic and relevant to practice and evaluation, indicated the greater need and potential for IA to contribute to sustainability by using and shaping causal pathways, and at the same time remained reflective of the broad objectives of sustainability.

Conclusions and further research needs

This paper has presented an expert-based set of criteria for analyzing the effectiveness of IA, derived from a Delphi process. Environmental assessment should be a vital public policy instrument for ensuring good environmental management, and if applied with sincerity it can improve the quality of development decisions and help decision-makers avoid unwarranted impacts. The criteria reflect a combination of operational, process and institutional factors that shape the impact and effectiveness of IA application and practice. The inference may be that the effectiveness of procedures and processes, including the institutional environment in which those procedures and processes operate, is a substantive part of ensuring that IA is an effective tool for environmental management. The resulting framework helps advance the discussion of effectiveness and provides a foundation for the evaluation of both the operational and conceptual–theoretical qualities of IA practice and implementation, and of environmental management outcomes and effectiveness.

Although we did not set out to explicitly test the Delphi as a research instrument, in our experience it functioned as a valuable tool for developing criteria for the evaluation and audit of processes and systems, and for reaching relative consensus about attributes, objectives and broad metrics. In this study, there were changes of opinion, and rethinking of individual stances and positions across the stages, which can be an indicator of reflective thinking and social learning emerging as participants saw their contributions merged and modified, and had the opportunity to do the same with the ideas presented by others. But this potential quality also runs the risk of focusing on those qualities that can be broadly agreed upon while possibly omitting important but less generally accepted criteria. For practitioners, the Delphi can be an important tool in IA research and evaluation for understanding the qualities of process, identifying regulatory change needs, predicting system impacts and performance issues, and forecasting the likely outcomes and environmental management contributions of policy or process reforms.

The study also meets, in part, the intended outcome of providing criteria (grouped under nine criteria themes) for use in the evaluation, and audit, of IA processes; the criteria may be framed as evaluation or audit questions. The next phase of this research will be to develop measures or metrics for each of the proposed criteria, which can be used in the evaluation or auditing of IA effectiveness. While there is a need to develop qualitative and quantitative evaluation approaches, addressing the methodological and analytical needs of evaluation is more challenging for some of the criteria than for others. This suggests the need to also draw on the theories and methods advanced by fields outside IA, such as policy analysis, planning or program evaluation. Regardless, there may be universal qualities that IA should possess if it is to be an effective environmental management tool, and these could well transcend context. This study demonstrates that there is the potential to develop general criteria for IA effectiveness. IA in many jurisdictions is being reconsidered in its present form as decision-makers and other stakeholders seek to evaluate the role of IA in development processes, planning and environmental management. The objectives of these deliberations may be environmentally altruistic, or not, but if anything these pressures reinforce the importance of understanding and demonstrating the effectiveness of IA as an essential part of environmental management and protection.
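As a final illustration of how criteria might be framed as audit questions, consider the following naive, mechanical rephrasing (a sketch only; in practice the instrument would be drafted and piloted by hand):

```python
def as_audit_question(criterion):
    """Naive rephrasing of a declarative criterion as an audit question."""
    body = criterion[0].lower() + criterion[1:].rstrip(".")
    return f"To what extent is it the case that {body}?"

print(as_audit_question("The process assesses cumulative effects."))
# -> To what extent is it the case that the process assesses cumulative effects?
```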

Acknowledgements

The authors thank the Delphi study participants for their valuable contributions, and especially their time and patience. The advice and insight of the reviewers and IAPA editors are also greatly appreciated.

Additional information

Funding

The Social Sciences and Humanities Research Council of Canada provided funding for this research. This support is gratefully acknowledged.

References

  • Appiah-Opoku S. 2001. Environmental impact assessment in developing countries: the case of Ghana. Environ Impact Assess Rev. 21:59–71.
  • Bailey J. 1997. Environmental impact assessment and management: an under-explored relationship. Environ Manage. 21(3):317–327.
  • Bailey J, Saunders A. 1988. Ongoing environmental impact assessment as a force for change. Project Appraisal. 2:37–42.
  • Baker D, McLelland J. 2003. Evaluating the effectiveness of British Columbia's environmental assessment process for First Nations' participation in mining development. Environ Impact Assess Rev. 23(5):581–603.
  • Ball M, Noble BF, Dube MG. 2013. Valued ecosystem components for watershed cumulative effects: an analysis of environmental impact assessments in the South Saskatchewan River Watershed, Canada. Integr Environ Assess Manage. 9(3):469–479.
  • Banuls VA, Turoff M. 2011. Scenario construction via Delphi and cross-impact analysis. Technol Forecast Social Change. 78:1579–1602.
  • Bardecki MJ. 1984. Participants' response to the Delphi method: an attitudinal perspective. Technol Forecast Social Change. 25:281–292.
  • Barnes J, Hegmann G. 2013. It's not the end of EA in Canada! Paper presented at: International Association for Impact Assessment Conference; Calgary, Alberta.
  • Bhatia R, Wernham A. 2008. Integrating human health into environmental impact assessment: an unrealized opportunity for environmental health and justice. Environ Health Perspect. 116(8):991–1000.
  • Bolger F, Stranieri A, Wright G, Yearwood J. 2011. Does the Delphi process lead to increased accuracy in group-based judgment or does it simply induce consensus amongst judgment forecasters? Technol Forecast Social Change. 78:1671–1680.
  • Bond A, Morrison-Saunders A, Howitt R. 2013. Framework for comparing and evaluating sustainability assessment practice. In: Bond A, Morrison-Saunders A, Howitt R, editors. Sustainability assessment: pluralism, practice and progress. New York, NY: Taylor and Francis.
  • Bond A, Morrison-Saunders A, Pope J. 2012. Sustainability assessment: the state of the art. Impact Assess Project Appraisal. 30(1):53–62.
  • Bond A, Pope J, Morrison-Saunders A, Retief F, Gunn JAE. 2014. Impact assessment: eroding benefits through streamlining? Environ Impact Assess Rev. 45:46–53.
  • Boyden A. 2007. Environmental assessment under threat. Fargo, ND: International Association for Impact Assessment Newsletter.
  • Canadian Environmental Assessment Agency. 2013. Operational policy statement: assessing cumulative environmental effects under the Canadian Environmental Assessment Act, 2012. Ottawa: Canadian Environmental Assessment Agency.
  • Cashmore M. 2004. The role of science in environmental impact assessment: process and procedure versus purpose in the development of theory. Environ Impact Assess Rev. 24:403–426.
  • Cashmore M, Gwilliam R, Morgan R, Cobb D, Bond A. 2004. The interminable issue of effectiveness: substantive purposes, outcomes and research challenges in the advancement of environmental impact assessment theory. Impact Assess Project Appraisal. 22(4):295–310.
  • Cashmore M, Richardson T, Hilding-Rydevik T, Emmelin L. 2010. Evaluating the effectiveness of impact assessment instruments: theorising the nature and implications of their political constitution. Environ Impact Assess Rev. 30:371–379.
  • Chanchitpricha C, Bond A. 2013. Conceptualizing the effectiveness of impact assessment processes. Environ Impact Assess Rev. 43:65–72.
  • Che X, English A, Lu J, Chen YD. 2011. Improving the effectiveness of planning EIA (PEIA) in China: integrating planning and assessment during the preparation of Shenzhen's master urban plan. Environ Impact Assess Rev. 31:561–571.
  • Diduck AP, Pratap D, Sinclair AJ, Deane S. 2013. Perceptions of the impacts, public participation and learning in the planning, assessment and mitigation of two hydroelectric projects in Uttarakhand, India. Land Use Policy. 33(1):170–182.
  • Doelle M. 2012. CEAA 2012: the end of federal EA as we know it? J Environ Law Pract. 24(1):1–17.
  • Duinker PN, Greig LA. 2007. Scenario analysis in environmental impact assessment: improving explorations of the future. Environ Impact Assess Rev. 27:206–219.
  • Ecken P, Gnatzy T, von der Gracht HA. 2011. Desirability bias in foresight: consequences for decision quality based on Delphi results. Technol Forecast Social Change. 78:1654–1670.
  • Emmelin L. 1998a. Evaluating environmental impact assessment systems. Part 1: theoretical and methodological considerations. Scand Housing Plan Res. 15(3):129–148.
  • Emmelin L. 1998b. Evaluating environmental impact assessment systems. Part 2: professional culture as an aid in understanding implementation. Scand Housing Plan Res. 15(4):187–209.
  • Fitzpatrick P, Sinclair AJ. 2009. Multi-jurisdictional environmental impact assessment: Canadian experiences. Environ Impact Assess Rev. 29:252–260.
  • Geist MR. 2010. Using the Delphi method to engage stakeholders: a comparison of two studies. Eval Program Plan. 33:147–154.
  • Gibson RB. 1993. Environmental assessment design: lessons from the Canadian experience. Environ Prof. 15:12–24.
  • Gibson RB. 2012. In full retreat: the Canadian government's new environmental assessment law undoes decades of progress. Impact Assess Project Appraisal. 30(3):179–188.
  • Glucker AN, Driessen PPJ, Kolhoff A, Runhaar HAC. 2013. Public participation in environmental impact assessment: why, who and how? Environ Impact Assess Rev. 43:104–111.
  • Graham B, Regehr G, Wright JG. 2003. Delphi as a method to establish consensus for diagnostic criteria. J Clin Epidemiol. 56:1150–1156.
  • Gupta UG, Clarke RE. 1996. Theory and applications of the Delphi technique: a bibliography (1975–1994). Technol Forecast Social Change. 53:185–211.
  • Harrison C. 2006. Industry perspectives on barriers, hurdles, and irritants preventing development of frontier energy in Canada's Arctic islands. Arctic. 59(2):238–242.
  • Hasson F, Keeney S. 2011. Enhancing rigour in the Delphi technique research. Technol Forecast Social Change. 78:1695–1704.
  • Heinma K, Poder T. 2010. Effectiveness of environmental impact assessment system in Estonia. Environ Impact Assess Rev. 30:272–277.
  • Hilding-Rydevik T, Bjarnadóttir H. 2007. Context awareness and sensitivity in SEA implementation. Environ Impact Assess Rev. 27(7):666–684.
  • Hollick M. 1981. EIA and environmental management in Western Australia. Environ Impact Assess Rev. 2(1):116–120.
  • Horvath C, Barnes J. 2013. EA in Canada: out of the frying pan, into the fire. Paper presented at: International Association for Impact Assessment Conference; Calgary, Alberta.
  • Hung H-L, Altschuld JW, Lee Y-F. 2008. Methodological and conceptual issues confronting a cross-country Delphi study of education program evaluation. Eval Program Plan. 31:191–198.
  • International Association for Impact Assessment [IAIA]. 2014. Available from: http://www.iaia.org/about/.
  • Jay S, Jones C, Slinn P, Wood C. 2007. Environmental impact assessment: retrospect and prospect. Environ Impact Assess Rev. 27:287–300.
  • Kauko K, Palmroos P. 2014. The Delphi method in forecasting financial markets: an experimental study. Int J Forecast. 30:313–327.
  • Landeta J. 2006. Current validity of the Delphi method in social sciences. Technol Forecast Social Change. 73:467–482.
  • Lawrence D. 2007. Impact significance determination: back to basics. Environ Impact Assess Rev. 27(7):755–769.
  • Linstone HA, Turoff M. 2011. Delphi: a brief look backward and forward. Technol Forecast Social Change. 78:1712–1719.
  • McCrank N. 2008. Road to improvement: the review of regulatory systems across the North. Ottawa: Indian and Northern Affairs Development Canada.
  • Moldan B, Janouskova S, Hak T. 2012. How to understand and measure environmental sustainability: indicators and targets. Ecol Indic. 17:4–13.
  • Morgan RK. 2012. Environmental impact assessment: the state of the art. Impact Assess Project Appraisal. 30(1):5–14.
  • Morrison-Saunders A. 1996. Environmental impact assessment as a tool for ongoing environmental management. Project Appraisal. 11:99–104.
  • Morrison-Saunders A, Arts J. 2004. Assessing impact: handbook of EIA and SEA follow-up. London: Earthscan.
  • Morrison-Saunders A, Bailey J. 1999. Exploring the EIA/environmental management relationship. Environ Manage. 24(3):281–295.
  • Morrison-Saunders A, Retief F. 2012. Walking the sustainability assessment talk – progressing the practice of environmental impact assessment (EIA). Environ Impact Assess Rev. 36:34–41.
  • Noble B, Storey K. 2005. Toward increasing the utility of follow-up in Canadian EIA. Environ Impact Assess Rev. 25(2):163–180.
  • O'Faircheallaigh C. 2010. Public participation and environmental impact assessment: purposes, implications and lessons for public policy making. Environ Impact Assess Rev. 30:19–27.
  • Okoli C, Pawlowski SD. 2004. The Delphi method as a research tool: an example, design considerations and applications. Inf Manage. 42:15–29.
  • Pill J. 1971. The Delphi method: substance, context, a critique and an annotated bibliography. Socio-Econ Plann Sci. 5:57–71.
  • Pölönen I. 2006. Quality control and the substantive influence of environmental impact assessment in Finland. Environ Impact Assess Rev. 26:481–491.
  • Pölönen I, Hokkanen P, Jalava K. 2011. The effectiveness of the Finnish EIA system – what works, what doesn't, and what could be improved? Environ Impact Assess Rev. 31:120–128.
  • Pope J, Bond A, Morrison-Saunders A, Retief F. 2013. Advancing the theory and practice of impact assessment: setting the research agenda. Environ Impact Assess Rev. 41:1–9.
  • Retief F, Welman CNJ, Sandham L. 2011. Performance of environmental impact assessment (EIA) screening in South Africa: a comparative analysis between the 1997 and 2006 EIA regimes. South Afr Geogr J. 93(2):154–171.
  • Rowe G, Wright G. 1999. The Delphi technique as a forecasting tool: issues and analysis. Int J Forecast. 15:353–375.
  • Sadler B. 1996. International study of the effectiveness of environmental assessment: environmental assessment in a changing world. Canadian Environmental Assessment Agency and International Association for Impact Assessment, Final Report.
  • Sandham LA, van Heerden AJ, Jones CE, Retief FP, Morrison-Saunders A. 2013. Does enhanced regulation improve EIA report quality? Lessons from South Africa. Environ Impact Assess Rev. 38:155–162.
  • Senécal P, Sadler B, Goldsmith B, Brown K, Conover S. 1999. Principles of environmental impact assessment, best practice. Fargo, ND: International Association for Impact Assessment; Lincoln, UK: Institute of Environmental Assessment.
  • Sinclair AJ, Diduck A, Fitzpatrick P. 2008. Conceptualizing learning for sustainability through environmental assessment: critical reflections on 15 years of research. Environ Impact Assess Rev. 28:415–428.
  • Snell T, Cowell R. 2006. Scoping in environmental impact assessment: balancing precaution and efficiency? Environ Impact Assess Rev. 26(4):359–376.
  • Steinemann A. 2001. Improving alternatives for environmental impact assessment. Environ Impact Assess Rev. 21:3–21.
  • Toro J, Requena I, Zamorano M. 2010. Environmental impact assessment in Colombia: critical analysis and proposals for improvement. Environ Impact Assess Rev. 30:247–261.
  • Tzoumis K. 2007. Comparing the quality of draft environmental impact statements by agencies in the United States since 1998 to 2004. Environ Impact Assess Rev. 27(1):26–40.
  • Van Doren D, Driessen P, Schijf B, Runhaar H. 2013. Evaluating the substantive effectiveness of SEA: towards a better understanding. Environ Impact Assess Rev. 38:120–130.
  • von der Gracht HA. 2012. Consensus measurement in Delphi studies – review and implications for future quality assurance. Technol Forecast Social Change. 79:1525–1536.
  • Webler T, Levine D, Rakel H, Renn O. 1991. A novel approach to reducing uncertainty. Technol Forecast Social Change. 39:253–263.
  • White L, Noble BF. 2013. Strategic environmental assessment for sustainability: a review of a decade of academic research. Environ Impact Assess Rev. 42:60–66.
  • Wilson L. 1998. A practical method for environmental impact assessment audits. Environ Impact Assess Rev. 18:59–71.
  • Wood C. 1993. Environmental impact assessment in Victoria: Australian discretion rules EA! J Environ Manage. 39(4):281–295.
  • Wood C. 2003. Environmental impact assessment: a comparative review. Harlow: Pearson Education.
  • Woudenberg F. 1991. An evaluation of Delphi. Technol Forecast Social Change. 40:131–150.
  • Yap MBH, Pilkington PD, Ryan SM, Kelly CM, Jorm AF. 2014. Parenting strategies for reducing the risk of adolescent depression and anxiety disorders: a Delphi consensus study. J Affective Disord. 156:67–75.
  • Zhang J, Kornov L, Christensen P. 2013. Critical factors for EIA implementation: literature review and research options. J Environ Manage. 114:148–157.
