
Good practices for environmental assessment

Pages 238–254 | Received 04 Sep 2014, Accepted 15 Jun 2015, Published online: 18 Sep 2015

Abstract

Environmental assessment (EA) has emerged in the last five decades as one of the primary management tools that governments use to protect the environment. However, despite substantial theoretical development and practical experience, there are concerns that EA is not meeting its objectives. This article develops a set of good practices to improve EA. An integrated list of proposed good practices is developed based on a literature review of impact assessment research and related fields of study. The practices are then evaluated by surveying experts and practitioners involved in EA of tar sands (also known as oil sands) development in Canada. In all, 74 practices grouped under 22 themes are recommended to improve EA. Key unresolved issues in EA requiring future research are identified.

1. Introduction

Environmental assessment (EA) has emerged in the last five decades as one of the primary management tools for protecting the environment. However, despite substantial theoretical development and practical experience, there are concerns that EA is not meeting its objectives of protecting the environment and ensuring sustainability (Jay et al. 2007; Noble 2009; Gibson & Hanna 2009).

To address these concerns and improve EA performance, scholars and practitioners have proposed various best practices (e.g., Senecal et al. 1999; Wood 2003; Gibson et al. 2005; Vanclay et al. 2015). While these efforts have considerably advanced EA knowledge and practice, there remain deficiencies in the best practices literature that need to be addressed. First, there are key EA design issues, such as the appropriate structure of EA appeal processes, that are not sufficiently canvassed in the existing literature. Second, much of the existing literature is based mainly upon author observation and inference, without a clear explanation of how the practices were derived and without sufficient evidence supporting the practices. The principles published by the International Association for Impact Assessment (IAIA) (Senecal et al. 1999; Vanclay 2003; Andre et al. 2006; Vanclay et al. 2015) are notable exceptions in that they describe how the practices were formulated and validated by soliciting input from EA experts, but the IAIA publications do not provide a transparent summary of the input received and how it was used. Third, the best practice literature does not adequately address contextual factors that may influence when and how the proposed practices should be applied in specific settings. Fourth, best practice recommendations are based largely on the EA literature and could be better informed by relevant literature in related fields. Fifth, best practice recommendations have not been sufficiently tested to verify links between processes and desired outcomes. Establishing causal connections between practices and substantive outcomes is an important aspiration of research on EA effectiveness (Sadler 1996; Cashmore et al. 2004).

Our objective in this article is to address some of these deficiencies in the best practice literature by reviewing, extending, and validating recommended EA best practices to produce a more comprehensive and empirically grounded basis for improving EA. We begin by outlining a methodology for developing best practices that integrates recommended best practices from the EA literature with related fields of study, and then test proposed practices by surveying experts and practitioners involved in EA of tar sands (also known as oil sands) development in Canada. We then present 74 recommended practices grouped under 22 themes that we term “good practices” for EA. We use the term good practices instead of best practices to reflect the limited amount of empirical validation of practices with respect to outcomes, and the likelihood of further refinement of the practices with additional research (Bardach 2004; Vesely 2011). Finally, we review limitations of our research and provide recommendations for future best practices research.

2. Methods

Our research method was designed to help advance the field of EA by exploring gaps in EA process guidance, conducting additional testing and validation of good EA practices, and integrating relevant guidance from related fields.

The first step was to review existing literature on EA best practices. Overviews of EA best practices are readily available in EA books and reports, IAIA best practice guidelines, and articles in EA journals (e.g., Senecal et al. 1999; Wood 2003; ICPGSIA 2003; Lawrence 2003; Gibson et al. 2005; Andre et al. 2006; Noble 2010; Vanclay et al. 2015). Generally, this literature identifies best practices through a combination of literature review and author judgement. We prepared an initial list of recommended practices from the EA literature with attention to how each practice was developed (i.e., author assertion, expert panel, survey research, case study findings, and/or other methods), the degree of rigour involved, the number of sources supporting the practice, and the level of consensus across authors and sources for the practice.

Next we reviewed relevant literature in related fields including: megaproject planning, environmental policy, risk assessment, administrative law, public policy, and planning. We then integrated relevant findings from this additional literature review into our list of recommended practices, again with attention to how the best practices were developed and the level of support for each practice across authors. Supplemental literature was located through web searches (e.g., on appeal systems, cumulative effects management, evaluation methods, best practices for policy and planning) and by tracing best practice ideas through the sources identified in bibliographies of materials already reviewed. This list of practices was then organized into common themes and integrated to eliminate overlap and promote coherence.

The next step was to survey EA experts to test practices on the compiled list. Due to the large number of practices, it was not feasible to test them all in the survey. We therefore did not test practices that had a high level of consensus support in the literature we reviewed, defining a high level of consensus as meaning that the practice is recommended in the literature and not explicitly criticized or opposed by any author. Practices that did not meet this test were examined in the survey by asking respondents to appraise each practice or rank it relative to other options and to provide additional commentary in open-ended questions. The final step was to use the literature and survey results to prepare a recommended list of good EA practices. Our recommended list identifies the sources for each practice and whether the practice was tested in our survey of experts (Table 1).

Table 1 Good practices in EA.
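
To illustrate the screening rule described above, the following minimal sketch (ours, not from the original study; the practice names and counts are hypothetical) encodes the consensus test: a practice is excluded from the survey only if it is recommended by at least one source and explicitly criticized by none.

```python
from dataclasses import dataclass

@dataclass
class Practice:
    name: str
    supporting_sources: int       # reviewed sources recommending the practice
    explicit_criticisms: int = 0  # reviewed sources criticizing or opposing it

def has_consensus(p: Practice) -> bool:
    """High consensus: recommended in the literature and not explicitly
    criticized or opposed by any reviewed author."""
    return p.supporting_sources > 0 and p.explicit_criticisms == 0

# Hypothetical examples, not entries from Table 1
practices = [
    Practice("Mandatory follow-up monitoring", supporting_sources=5),
    Practice("Proponent-led impact studies", supporting_sources=3,
             explicit_criticisms=2),
]

# Only contested practices go forward to the expert survey
to_survey = [p.name for p in practices if not has_consensus(p)]
print(to_survey)  # ['Proponent-led impact studies']
```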

The survey was conducted through an online questionnaire sent to experts across Canada with extensive knowledge and experience with the Alberta provincial and Canadian federal EA processes as they apply to tar sands development in Alberta. Tar sands development was used as the context for the survey questions because it is one of the most prominent types of development in Canada subject to EA and it has involved provincial as well as federal EA processes; therefore, it provides a good testing ground for recommended EA practices (Joseph 2013).

Potential survey respondents were identified through an iterative process of contacting relevant experts from government agencies and non-government organizations, EA researchers in Canada who had published on the topic of tar sands EA, and people recommended by respondents during previous rounds of contact. Invitations were initially sent to 189 people, but some declined to participate and some did not fully complete the questionnaire. In the end, 75 experts completed the confidential questionnaire. Respondents came from the federal, Alberta provincial, and local governments as well as industry, consultancies, and Aboriginal, environmental, and citizen groups (Figure 1). Respondents had an average of 10.3 years of experience with the tar sands EA process. Unfortunately, there was very limited participation from two of the official government agencies involved in tar sands EA: only one staff member of the Alberta Energy Regulator (formerly called the Energy Resources Conservation Board) responded, and no employees of the Canadian Environmental Assessment Agency participated. The limited participation from these two agencies was due to government decisions to restrict their participation in external surveys. However, former employees of both agencies did participate in the survey.

Figure 1 Respondents as characterised by themselves.

3. Description of good practices

Based on the results of our literature review and survey, we identify 74 good practices grouped under 22 themes (Table 1). This range of good practices reflects the multi-faceted complexity of EA and the fact that the overall success of EA depends on addressing a great many factors. Deficiencies in any one dimension may undermine the rest of the process.

A related and interesting feature of the recommended practices is the breadth of the potential intellectual foundation for EA. Through our literature review it became apparent that the field of EA can benefit from guidance from many bodies of knowledge, including megaproject planning, environmental policy, risk assessment, administrative law, public policy, and planning. Surprisingly, there has been little explicit cross-fertilization between the EA literature and these related fields.

Another notable feature of our recommended practices is that high-quality information – a critical element of sound EA – is the product of many of the practices (Figure 2). Essentially, the broader body of good practices must be attended to in order for decision-makers to have the information required for good decision-making.

Figure 2 High-quality information as a function of good practice.

4. Contentious issues

In this section, we present the results of our survey on contentious issues in EA practice. For each issue, we briefly explain it and then discuss the survey results, including our interpretation of responses to relevant open-ended questions.

4.1 Impact assessment studies

A fundamental question in designing EA is who should be responsible for completing the impact assessment (IA) studies to be used in the EA. Considerations affecting this decision include the need for project designers to work together with impact assessors to optimize mitigation strategies and the need to minimize bias that may be introduced by having the project proponent prepare the IA (Hollick 1984; Wathern 1988; Nikiforuk 1997; Herring 2009). We tested five alternative models for allocating responsibility for managing and paying for IA (Table 2). Respondents strongly favoured using an independent assessor, with 39% preferring that the proponent pay for the independent assessor and 14% favouring the government paying, for a total of 53%. In contrast, 30% supported the current approach of relying on the project proponent. Respondents supporting independent assessment were concerned with ensuring independence in preparing the EA to minimize potential bias, and did not raise concerns about the potential consequences of separating impact assessors from project designers (such as impeding integration of project design with mitigation, and impeding financial accountability by separating the proponent's funding of studies from the independent assessor undertaking them). Designing an independent assessment system that addresses these issues requires additional research.

Table 2 Rankings of different models for who conducts and pays for impact assessment studies.

4.2 Review body

Key issues associated with the design of the entity responsible for overseeing and managing EA include the degree of independence from government, efficiency of the process, level of expertise involved, authority and resources available, accountability to the public, and ‘siloing’ (or fragmentation) of information and decision-making processes across government (Warrack 1993; Sadar 1996; Vanclay 2003; Wood 2003; Kennett 2006; Van Hinte et al. 2007). We identified three models for managing the EA process from the literature (Table 3). Respondents ranked the independent review model the highest, citing its independence as the key benefit. According to respondents' open-ended answers, the main problem with the temporary review body model is lack of continuity in staff, and the main problem with the government department model is potential bias. Respondents did not express any concerns regarding potential siloing (i.e., a disconnect between the review body and the concerns and knowledge across arms of government) with the independent review body option.

Table 3 Survey results ranking different review body models.

4.3 Final decision-maker

Another important issue in EA is who should make the final decision. Key considerations include ensuring that the EA process is democratically accountable while ensuring that decision-makers have sufficient expertise and are unbiased. In Canada, final decision-making responsibility for major projects is typically assigned to elected officials based on recommendations received from review panels appointed by government. This model is intended to maintain democratic accountability while utilizing the advice of individual experts who have at least some degree of independence (Gunton et al. 2004; Van Hinte et al. 2007). We evaluated four alternative models for final decision-making in our survey, including the current Canadian model of elected officials making final decisions based on recommendations from independent review bodies (Table 4). While the largest group of respondents preferred the independent final decision-maker model (47%), almost one-third (28%) preferred a consensus-based stakeholder table and 26% preferred elected officials guided by recommendations from either an independent body or a stakeholder table. Although many respondents favoured independence, some expressed concern that independent bodies are not directly accountable to the general public. To some degree, this concern can be addressed by other design options regarding decision criteria and stakeholder engagement discussed below. However, given the concern for democratic accountability in the literature and among survey respondents, more research is warranted on the role of elected officials versus independent review bodies and on ways of making independent decision-makers more democratically accountable.

Table 4 Survey results ranking different models of who makes the final decision.

4.4 Time

A common issue in EA is whether to impose time limits for completing the EA process. The argument in favour of a time limit is that it provides increased certainty for project investors and may improve the overall efficiency of the process, while the argument against is that it may not allow sufficient opportunity for a thorough review (Van Hinte et al. 2007). Seventy-six percent of respondents thought that a legal constraint on time was either ‘very important’ (20%), ‘important’ (28%), or ‘somewhat important’ (28%); only 20% thought that a constraint on time was ‘not important’. Many respondents did indicate that efficiency must be balanced with ensuring that sufficient time is available for a high-quality review. Respondents commented that sufficient time must be given to all parties involved, and that a process that limits the time for government and stakeholders to scrutinize applications is unfair given that proponents are typically far less restricted in how long they have to assemble their applications. One respondent wrote that ‘it is more important that the review be done correctly rather than steward to some legal time commitment’. Therefore, while there is strong support for the concept of legal limits on the length of the EA process, time limits need to be carefully designed to ensure fairness and to allow sufficient length and flexibility for adequate project review.

4.5 Methods of impact assessment

Choosing appropriate methods for analysis is critical to the success of IA. Two methods that receive special attention in the megaproject literature are reference class forecasting (RCF) and cost-benefit analysis (CBA). RCF works by comparing forecasts of the impacts of the project under review to the realized impacts of a class of similar projects (Lovallo & Kahneman 2003). Flyvbjerg (2007, p. 28) explains that RCF ‘does not try to forecast the specific uncertain events that will affect the particular project, but instead places the project in a statistical distribution of outcomes from this class of reference projects’. CBA is based on welfare economics and identifies and compares a project's positive and negative impacts to arrive at a discounted net present value. CBA is widely endorsed in the field of project evaluation as the principal method for examining the value of projects to society (e.g., Davis 1990) but is also the subject of much criticism (e.g., Atkinson & Mourato 2008). Currently, neither method is used in tar sands EA.
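
As an illustration of the RCF logic described by Flyvbjerg, the method amounts to applying an empirical quantile of realized outcomes from comparable past projects to the forecast under review. The sketch below is ours, and the overrun figures are invented for illustration, not drawn from any actual reference class.

```python
# Realized cost overruns (actual / forecast) for a hypothetical
# reference class of comparable projects
reference_overruns = [1.10, 1.35, 0.95, 1.60, 1.25, 1.45, 1.05, 1.80]

def rcf_uplift(forecast_cost: float, overruns: list[float],
               percentile: float = 0.8) -> float:
    """Adjust a forecast so that, based on the reference class, there is
    roughly `percentile` confidence the realized cost will not exceed it."""
    ordered = sorted(overruns)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    return forecast_cost * ordered[idx]

# A $1,000 forecast uplifted to the 80th percentile of the reference class
print(rcf_uplift(1_000.0, reference_overruns))  # 1600.0
```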

Fifty-seven percent of respondents felt that the methods currently used to assess tar sands projects in Canada are inappropriate, and respondents were highly supportive of both RCF and CBA in EA. A total of 84% of respondents agreed that RCF is very important (27%), important (32%), or somewhat important (25%) in EAs, and 91% agreed that CBA is very important (43%), important (29%), or somewhat important (19%). However, respondents' comments indicated that support for CBA is qualified. Many respondents were concerned that the monetary value of economic benefits is easily measured while social and environmental impacts are much harder to monetize. Several respondents also noted that Aboriginal rights issues cannot be addressed through CBA. To help address these concerns, it may be prudent to use a variant of CBA such as multiple accounts analysis, which better addresses impacts that cannot be easily monetized and identifies the distribution of costs and benefits by stakeholder group (Shaffer 2010). We acknowledge, however, that while multiple accounts analysis may be useful in identifying the impact of projects on Aboriginal interests, additional measures will be required to address Aboriginal rights in the EA process.
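
The following sketch (ours; all figures and account names are hypothetical) illustrates the mechanics of the two approaches: conventional CBA collapses all impacts into a single discounted net present value, whereas a multiple accounts variant keeps monetized and non-monetized accounts separate by stakeholder group.

```python
def npv(net_flows: list[float], discount_rate: float) -> float:
    """Discounted net present value of a stream of annual net benefits
    (year 0 first)."""
    return sum(f / (1 + discount_rate) ** t for t, f in enumerate(net_flows))

# Conventional CBA: one aggregate monetized stream
print(round(npv([-500.0, 120.0, 150.0, 150.0, 150.0, 150.0], 0.05), 1))

# Multiple accounts: report each account separately, including impacts
# that resist monetization, rather than one aggregate total
accounts = {
    "proponent (monetized)": npv([-500.0, 200.0, 200.0, 200.0], 0.05),
    "government (royalties)": npv([0.0, 30.0, 30.0, 30.0], 0.05),
    "environment (qualitative)": "moderate loss of wetland habitat",
}
for name, value in accounts.items():
    print(f"{name}: {value}")
```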

4.6 Mitigation and compensation

Mitigation has been used for many years as a tool to promote sustainability in project development. According to several authors, mitigation of adverse project impacts should be mandatory, and if mitigation measures will not work, developers should be required to compensate all groups left worse off by a project to ensure that externalized costs are included in the project assessment (Gunton et al. 2004; Van Hinte et al. 2007; Wozniak 2007). However, Rajvanshi (2008) cautions that compensation should be a last resort to address project harms because compensation may not be a sufficient substitute for the original asset.

Respondents to our survey strongly supported a requirement to mitigate and compensate for negative impacts (Table 5). However, several respondents felt that there should be limits on what harms must be mitigated and when compensation should occur. Respondents thus raised a key issue in mitigation: what minimum thresholds must be passed for a negative impact to be harmful enough to require mitigation or compensation? Respondents offered numerous ideas, such as compensating for economic harms only if they are not temporary, and restricting mitigation to impacts occurring within a certain geographical boundary, such as the jurisdiction in which development occurs. Therefore, while there is strong support for the principle of compensation, additional research is required on defining appropriate terms and conditions for providing compensation.

Table 5 Level of importance of mitigation and compensation.

4.7 Decision criteria and discretion

Another matter of debate is whether decision-makers should be constrained by detailed decision criteria or should have broad discretion. Some analysts (Gibson 1993; Van Hinte et al. 2007) argue that discretion should be constrained by explicit criteria to reduce the role of subjective bias of decision-makers and to ensure decisions are consistent and transparent. Explicit decision criteria can also increase democratic accountability by requiring independent experts to follow guidelines developed by elected officials. Other analysts (e.g., Wood 1995) note that discretion is required to address the diversity of circumstances encountered in EA.

Respondents to our survey strongly favoured constraining discretion. Seventy-two percent and 61% of respondents felt that the discretion of elected and non-elected decision-makers, respectively, should be heavily constrained by explicit and legally binding decision-making rules. However, some respondents cautioned that some discretion is necessary. As one respondent commented, ‘highly constrained decision making is not decision making – it makes the decisions ahead of time in setting the constraints. That can be appropriate for routine (small) projects that are low risk … [but] major projects require specific considerations and decisions.’

4.8 Stakeholder participation

Another basic principle in EA is that stakeholders should participate in and have the capacity to influence the outcome of the process (Senecal et al. 1999). There are many models of stakeholder participation (Beierle & Cayford 2002); these models can be grouped into three categories: information sharing, consultation, and collaboration (Gunton et al. 2010). Information sharing informs stakeholders without seeking their input; consultation seeks input from stakeholders through an interactive process without formal obligation to incorporate this input into the final decision; and collaboration engages stakeholders in an interactive process that ensures incorporation of stakeholders' views into the decision by seeking stakeholder endorsement of decisions through negotiation. We tested versions of all three models in the survey (Table 6).

Table 6 Survey results ranking different models of stakeholder involvement.

The quasi-judicial model, a variant of the consultation model in which stakeholders may submit evidence and cross-examine witnesses, received the highest ranking from respondents. This is the model typically used in Canada for EA of large projects. The collaborative model was the first choice of almost as many respondents as the quasi-judicial model (36% vs 43%) but received an average ranking slightly below that of the gathering stakeholder input model. The lower average ranking for the collaborative model relative to the quasi-judicial model is inconsistent with the trend in recent literature favouring collaborative models and concepts such as free, prior and informed consent, on the grounds that collaboration ensures a higher degree of stakeholder engagement and is more likely to generate decisions that meet the interests of all parties (Busenberg 1999; Susskind et al. 2003; Frame et al. 2004; Innis & Booher 2010; Morgan 2012; Vanclay et al. 2015). The survey results may also reflect respondents' greater familiarity with the quasi-judicial model or concerns about timeliness and efficiency. Finally, note that the survey posed the collaborative and quasi-judicial models as either/or options when they could be used in conjunction, for example by requiring collaboration prior to the commencement of a quasi-judicial process, thus achieving the potential benefits of both approaches. Given the lack of clear preference in the survey results, more research on the merits of collaborative versus quasi-judicial models is warranted.
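
The divergence between first-choice share and average ranking noted above is a general property of ranked ballots. The toy example below (invented ballots, not our survey data) shows how one option can attract more first choices than another yet receive a worse average rank.

```python
from statistics import mean

# Each ballot ranks models A (quasi-judicial), B (collaborative), and
# C (gathering input) from 1 (best) to 3 (worst)
ballots = [
    {"A": 1, "B": 3, "C": 2},
    {"A": 1, "B": 3, "C": 2},
    {"A": 1, "B": 2, "C": 3},
    {"A": 3, "B": 1, "C": 2},  # B's supporters rank it first...
    {"A": 3, "B": 1, "C": 2},
    {"A": 2, "B": 3, "C": 1},  # ...but others tend to rank B last
]

for model in ("A", "B", "C"):
    firsts = sum(1 for b in ballots if b[model] == 1)
    avg_rank = mean(b[model] for b in ballots)
    print(model, "first choices:", firsts, "average rank:", round(avg_rank, 2))
# B wins more first choices than C (2 vs 1) yet has a worse
# average rank (2.17 vs 2.0, lower is better)
```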

4.9 Precautionary practices

Effective EA processes need to adopt various strategies to manage the risks inherent in the development of large projects. At a minimum, the process should identify sources of uncertainty, describe and analyse risk, develop and implement strategies to reduce risk, monitor the effectiveness of mitigation, and ensure that proponents adjust procedures and activities if monitoring results are poor (Collingridge 1992; ICPGSIA 2003; Gibson et al. 2005). However, additional precautionary strategies might also be taken.

One recommended strategy is to apply a form of the precautionary principle (Vanclay 2003). Another strategy is to incorporate active adaptive management into the process, whereby uncertainties are actively probed (Gibson et al. 2005; Van Hinte et al. 2007). A third strategy is to employ a trial-and-error approach that tests new technologies on small-scale pilot projects to prove their effectiveness prior to general application (Collingridge 1992). A fourth strategy is to assign liability at the time of approval for unforeseen consequences. Respondents strongly supported all of these strategies (Table 7).

Table 7 Respondents’ views of the importance of several precautionary practices.

4.10 Appeal mechanisms

It is well accepted in the literature that stakeholders should have the right to appeal EA decisions (Gibson 1993; Altshuler & Luberoff 2003; Wood 2003; Vanclay et al. 2015). However, many of the details of the appeal process remain unresolved. One issue is whether appeals should be limited to procedural issues (i.e., were all procedural requirements carefully followed?) or should also consider matters of substance (i.e., were the logic, reasoning, and interpretation of evidence correct?). According to 61% of respondents, stakeholders should be able to appeal EA decisions on matters of both procedure and substance, though several respondents noted that there must still be a test to ensure that appeals are not groundless. Only 3% of respondents opposed having an appeal process.

A second issue is defining who should have standing to initiate an appeal. Forty-seven percent of respondents thought that the right to appeal should be restricted to parties with a direct pecuniary and/or property interest impacted by the project but excluding the proponent, 34% thought the right should be extended to the project proponent, and 19% thought that any interested party should have the right to appeal.

A third issue is who should hear appeals. Key considerations are ensuring that the appeal is heard by a party that has sufficient expertise in the substance of the matters and is impartial (Gibson 1993; Lawrence 2003; Van Hinte et al. 2007). Common options for who hears appeals include the EA review body that issued the decision, the courts, or a specialized appeal body with expertise in EA. Two-thirds of respondents felt that appeals should not be heard by the original decision-maker; comments pointed to concerns regarding potential bias in self-review. We did not ask respondents to rate the courts versus specialized EA appeal bodies due to the lack of experience in Canada with the latter. However, we note that while judges may have expertise on questions of legal procedure, they lack expertise on substantive scientific issues in EA and, in Canada at least, have been reluctant to consider matters of substance in appeals (Tollefson 2003). Consequently, a specialized appeal body with expertise in EA may be superior to relying exclusively on the courts to hear appeals.

Regardless of which option is chosen, respondents were divided on what powers the appellate body should have. Just over half (51%) felt that the appellate body should only be able to send the decision back for reconsideration, 43% felt that it should have the authority to choose between replacing the decision and sending it back for reconsideration, and 6% felt that it should have the authority to replace the decision. Clearly, defining the powers of the appellate body requires further investigation.

4.11 Public investment in projects

Governments sometimes invest in projects when private markets undersupply or are unable to provide necessary infrastructure or other public goods (Bruzelius et al. 2002; Flyvbjerg et al. 2003). Governments may also have a direct pecuniary interest in promoting project development, particularly when the project generates royalties and taxes (Gunton 2003b). Although government investment in development may not have any direct impact on the structure of the EA review process, such investment can be an important factor affecting EA, as it may bias EA decision-making or lead to the promotion of uneconomic projects. Consequently, the megaproject literature strongly recommends that governments minimize financial involvement in projects and focus on their regulatory role of public safeguard (Bruzelius et al. 2002; Gunton 2003a; Flyvbjerg 2008).

When asked if they support government financial involvement in large-scale project development, 61% of respondents said that ‘it depends’, 22% said no, and 14% said yes. The dominant rationale among supporters of government financial involvement was the perception that financial support may sometimes be required: 46% of those who supported government involvement felt that private developers have too little incentive to provide public goods, and 31% of the same sub-group felt that private developers are unwilling or unable to shoulder the full costs of development. Many respondents pointed out the need for impartial decision-making if government becomes financially involved: 85% of those who said government should not get financially involved felt that public investment would bias EA decisions, and 55% of this same sub-group felt that government involvement would subsidize uneconomic development.

4.12 Tenure decision-making and project review

A common issue in EA of resource development projects is whether tenure decisions allocating access, entitlement, and consent to use public land and natural resources should take place within the EA process or be dealt with in a separate process in advance of EA. The argument in support of making tenure decisions in advance of EA is that project applicants need certainty of tenure in order to justify the cost of preparing an EA application and participating in an EA process. The counter-arguments are that separating tenure decision-making from the EA review will result in tenure decisions being made without full consideration of the environmental consequences of development, and that the granting of tenure may introduce an expectation of development that could bias the EA process (Kennett et al. 2006; Droitsch et al. 2008). Fifty-six percent of respondents supported integration of tenure and permitting decisions with EA as a single process, 29% thought that tenure, EA decisions, and permitting should occur in multiple decision-making steps, and 14% were unsure. Several respondents commented that granting tenure prior to EA introduces a bias in favour of approval into EA decision-making. However, respondents did not indicate how to ensure that project proponents would have sufficient incentive to initiate an EA application without secure tenure rights. Integration of the allocation of tenure rights with the EA process is therefore an area requiring further investigation.

5. Discussion

Our list of 74 good practices (Table 1) synthesizes what was learned in the literature review and survey into a cohesive body of good practices for effective EA. Our research addressed some of the deficiencies in the existing literature on EA best practices by incorporating findings from other relevant fields of research and by transparently validating the more contentious issues with a survey of experts. The results reveal a broad array of factors that a good EA process needs to address if it is to fulfil the objective of promoting sustainability. We acknowledge, though, that the complexity and breadth of the good practices pose a formidable challenge to those designing and revising EA law, policy, and practice.

Further study of the causal connections between EA practices and outcomes is needed, and there are good reasons to expect that instituting these practices will contribute to more effective assessments. Sadler's (1996) seminal international study of EA effectiveness identifies three main types of effectiveness for EA – procedural, transactive, and substantive – and some subsequent researchers recognize normative effectiveness as an additional dimension (Chanchitpricha & Bond 2013). Procedural effectiveness involves conforming with ‘established provisions and principles’ (Sadler 1996, p. 39). There is a clear link between good practices and this form of effectiveness: a comprehensive and detailed set of good practices establishes procedural standards for practitioners and other participants to follow as they undertake assessments, and provides criteria for evaluators to use in judging the quality of EA. Transactive effectiveness emphasizes efficient use of time, money, and other resources (Sadler 1996). Two of the themes in our good practices directly address transactive effectiveness (‘process management’ and ‘resources and time’), and several practices listed under other themes also foster more efficient assessments. Substantive and normative effectiveness are concerned with whether EAs achieve their specified objectives and broader societal goals (Chanchitpricha & Bond 2013). Judgements about such matters will vary with the perspectives of evaluators, including their views about the proper goals and objectives for EA and its role in decision-making (Morgan 2012). We relied on the extensive accumulated experience and wisdom of our survey respondents, along with that of previous authors of EA best practices, to identify practices that EA experts associate with effectiveness. We also note that our recommended practices target the factors that Chanchitpricha and Bond (2013) identify as influencing substantive effectiveness, including the regulatory framework, the decision-making context, stakeholder and public participation, and the quality of the impact assessment report. Moreover, our practices align well with the ‘effectiveness criteria’ developed by Hanna and Noble (2015) from their Delphi survey of EA experts.

In addition to the need for further research linking practices with EA outcomes, there are several other limitations of our research that should be addressed in future work. First, our survey results are based on the responses of experts involved in tar sands EA, and these responses may not be valid for all other contexts (Joseph 2013). This limitation is offset to some degree because we drew our initial practices from the literature on EA, project review, and planning in a variety of contexts, and we formulated our recommended list of good practices in a manner that provides flexibility to adapt to local circumstances. Our list of good practices might be thought of as principles that should be adapted and refined to specific contexts (Fischer & Gazzola 2006; Runhaar & Driessen 2007). However, given the contextual framing of our expert survey, the practices may be best suited to the assessment of large projects in jurisdictions with well-functioning institutions. Future researchers should continue to advance EA theory by exploring the relationships between good practices and different review contexts.

A second limitation is that while our survey did gather data on respondent affiliations (e.g., industry, government, Aboriginal group), our method did not allow us to distinguish patterns in the data that might indicate biases or preferences for certain practices by worldview or position within the EA process. Our survey allowed people to define their own labels to describe their affiliation(s), and numerous respondents had multiple affiliations, so we were not able to determine patterns in the response data by respondent affiliation with confidence.

Finally, while we have attempted to develop a comprehensive list of good practices for EA, a number of issues remain unresolved. These priority issues for future research include:

  • integrating project design prepared by project proponents with the impact assessment studies prepared by independent assessors;

  • achieving the appropriate balance between independent decision-making authority by review panels and democratic accountability by elected officials;

  • evaluating the role of innovative conflict resolution models such as collaborative, consensus-based stakeholder negotiations and free and prior informed consent in project review;

  • designing time limit requirements that achieve an appropriate balance between certainty, efficiency, fairness, and flexibility;

  • improving procedures for determining compensation for negatively impacted stakeholders;

  • determining the structure and authority of EA appeal bodies;

  • determining EA decision criteria that are specific enough to achieve consistency in outcomes by limiting unwarranted discretion while flexible enough to adapt to the unique circumstances of each application; and

  • integrating tenure allocation decisions into the EA process while providing sufficient certainty and incentives for project developers to prepare a project approval application without biasing EA decisions.

6. Conclusion

In this article we review and extend the existing EA best practice literature by synthesizing guidance from additional relevant literature in other fields and by validating more controversial practices through a survey of EA experts and practitioners involved in tar sands EA in Canada. Based on this research we identify 74 good practices for EA.

Our findings indicate that EA is a complex, interdependent system that requires implementation of all of the good practices to function effectively but in a manner that respects the unique context of the application. We recommend further research evaluating the causal connections between good practices and EA outcomes to better understand the relationship between procedural and substantive effectiveness, and on several priority areas that remain unresolved. With additional research in these areas, EA good practices can be further revised and refined over time to help EA achieve the fundamental objective of ensuring that development is sustainable.

Acknowledgements

We would like to thank the Social Sciences and Humanities Research Council, Simon Fraser University (President's PhD Scholarship), Waterhouse Graduate Fellowship in Organizational Change and Innovation, and Jake McDonald Memorial Scholarship for financial assistance. We also thank the editor and two anonymous reviewers for constructive suggestions.

Disclosure statement

None of the three authors have any conflicts of interest; this work was completed as part of the first author's PhD research (the second and third authors were supervisors of the research).

References

  • Ahammed R, Harvey N. 2004. Evaluation of environmental impact assessment procedures and practice in Bangladesh. Impact Assess Proj Apprais. 22:63–78. doi:10.3152/147154604781766102.
  • Ahmad B, Wood C. 2002. A comparative evaluation of the EIA systems in Egypt, Turkey and Tunisia. Environ Impact Assess Rev. 22:213–234. doi:10.1016/S0195-9255(02)00004-5.
  • Allen C. 2004. Reducing uncertainty. Public Roads. 68:34–39.
  • Altshuler A, Luberoff D. 2003. Mega-projects: the changing politics of urban public investment. Washington, DC and Cambridge, MA: Brookings Institution Press and Lincoln Institute of Land Policy.
  • Andre P, Enserink B, Connor D, Croal P. 2006. Public participation: international best practice principles. Fargo, ND: International Association for Impact Assessment.
  • Archer K, Gibbons R, Knopff R, Pal LA. 1999. Parameters of power: Canada's political institutions. 2nd ed. Toronto: ITP Nelson.
  • Arnstein SR. 1969. A ladder of citizen participation. J Am Plann Assoc. 35:216–224.
  • Ascher W. 1993. The ambiguous nature of forecasts in project evaluation: diagnosing the over-optimism of rate-of-return analysis. Int J Forecasting. 9:109–115. doi:10.1016/0169-2070(93)90058-U.
  • Atkinson G, Mourato S. 2008. Environmental cost-benefit analysis. Annu Rev Environ Resourc. 33:317–344. doi:10.1146/annurev.environ.33.020107.112927.
  • Bardach E. 2004. Presidential address – The extrapolation problem: how can we learn from the experience of others? J Pol Anal Manage. 23:205–220. doi:10.1002/pam.20000.
  • Barget E, Gouguet J-J. 2010. Hosting mega-sporting events: which decision-making rule? Int J Sport Finance. 5:141–162.
  • Baxter W, Ross WA, Spaling H. 2001. Improving the practice of cumulative effects assessment in Canada. Impact Assess Proj Apprais. 19:253–262. doi:10.3152/147154601781766916.
  • BC Auditor General. 2011. An audit of the environmental assessment office's oversight of certified projects. Victoria, BC.
  • Beierle TC, Cayford J. 2002. Democracy in practice: public participation in environmental decisions. Washington, DC: Resources for the Future.
  • Benevides H, Kirchhoff D, Gibson R, Doelle M. 2008. Law and policy options for strategic environmental assessment in Canada. Ottawa: Canadian Environmental Assessment Agency.
  • Blair SR, Carr SG. 1981. Major Canadian projects, major Canadian opportunities: a report by the Consultative Task Force on Industrial and Regional Benefits from Major Canadian Projects.
  • Bond A, Morrison-Saunders A, Pope J. 2012. Sustainability assessment: the state of the art. Impact Assess Proj Apprais. 30:53–62. doi:10.1080/14615517.2012.661974.
  • Boyd D. 2003. Unnatural law: rethinking Canadian environmental law and policy. Vancouver, BC: UBC Press.
  • Bradshaw B. 2003. Questioning the credibility and capacity of community-based resource management. Can Geogr. 47:137–150. doi:10.1111/1541-0064.t01-1-00001.
  • Bruzelius N, Flyvbjerg B, Rothengatter W. 2002. Big decisions, big risks. Improving accountability in mega projects. Transp Policy. 9:143–154. doi:10.1016/S0967-070X(02)00014-8.
  • Busenberg GJ. 1999. Collaborative and adversarial analysis in environmental policy. Policy Sciences. 32:1–11. doi:10.1023/A:1004414605851.
  • CAPP. 2006. Industry practices: developing effective working relationships with aboriginal communities. Calgary: Canadian Association of Petroleum Producers; [cited 2009 July 6]. Available from: http://membernet.capp.ca/raw.asp?x=1&dt=NTV&dn=100984.
  • Cashmore M, Gwilliam R, Morgan R, Cobb D, Bond A. 2004. The interminable issue of effectiveness: substantive purposes, outcomes and research challenges in the advancement of environmental impact assessment theory. Impact Assess Proj Apprais. 22:295–310. doi:10.3152/147154604781765860.
  • CCME. 2009. Regional strategic environmental assessment in Canada: principles and guidance. Winnipeg: Canadian Council of Ministers of the Environment.
  • CEARC. 1998. Evaluating environmental impact assessment: an action prospectus. Hull, QC: Canadian Environmental Assessment Research Council.
  • Chanchitpricha C, Bond A. 2013. Conceptualising the effectiveness of impact assessment processes. Enviro Impact Assessment Rev. 43:65–72. doi:10.1016/j.eiar.2013.05.006.
  • Chicken JC. 1994. Managing risks and decisions in major projects. New York: Chapman and Hall.
  • Cocklin C, Kelly B. 1992. Large-scale energy projects in New Zealand: whither social impact assessment? Geoforum. 23:41–60. doi:10.1016/0016-7185(92)90035-3.
  • Collingridge D. 1992. The management of scale: big organizations, big decisions, big mistakes. New York: Routledge.
  • Cooke RM. 1991. Experts in uncertainty: opinion and subjective probability in science. New York: Oxford University Press.
  • Creasey R, Ross WA. 2009. The Cheviot Mine Project: cumulative effects assessment lessons for professional practice. In: Environmental impact assessment: practice and participation. 2nd ed. Don Mills, ON: Oxford University Press; p. 158–172.
  • Davis HC. 1990. Regional economic impact analysis and project evaluation. Vancouver: University of British Columbia Press.
  • de Bruijn H, Leijten M. 2008. Management characteristics of mega-projects. In: Priemus H, Flyvberg B, van Wee B, editors. Decision-making on mega-projects. Northampton, MA: Edward Elgar; p. 23–39.
  • Diez MA. 2001. New approaches to evaluating regional policy: the potential of a theory-based approach. Greener Manage Int. 2001:37–49. doi:10.9774/GLEAF.3062.2001.wi.00006.
  • Donnelly A, Dalal-Clayton B, Hughes R. 1998. A directory of impact assessment guidelines. 2nd ed. International Institute for Environment and Development.
  • Doyle D, Sadler B. 1996. Environmental assessment in Canada: frameworks, procedures & attributes of effectiveness. A report in support of the International Study of the Effectiveness of Environmental Assessment. Ottawa: Canadian Environmental Assessment Agency.
  • Droitsch D, Kennett SA, Woynillowicz D. 2008. Curing environmental dis-integration: a prescription for integrating the government of Alberta's strategic initiatives. Drayton Valley/Canmore, AB: The Pembina Institute and The Water Matters Society of Alberta.
  • Duinker PN, Greig LA. 2006. The impotence of cumulative effects assessment in Canada: ailments and ideas for redeployment. Environ Manage. 37:153–161. doi:10.1007/s00267-004-0240-5.
  • EMMRPIWG. 2008. Key factors for environmental assessment/regulatory success in Canada's mining and energy sectors. Submission for the 2008 Energy and Mines Ministers’ Conference.
  • Esteves AM, Franks D, Vanclay F. 2012. Social impact assessment: the state of the art. Impact Assess Proj Apprais. 30(1):34–42. doi:10.1080/14615517.2012.660356.
  • Égré D, Senécal P. 2003. Social impact assessments of large dams throughout the world: lessons learned over two decades. Impact Assess Proj Apprais. 21:215–224.
  • Failing L, Gregory R, Harstone M. 2007. Integrating science and local knowledge in environmental risk management: a decision-focused approach. Ecol Econ. 64:47–60. doi:10.1016/j.ecolecon.2007.03.010.
  • Fiorino DJ. 1989. Technical and democratic values in risk analysis. Risk Anal. 9:293–299. doi:10.1111/j.1539-6924.1989.tb00994.x.
  • Fiorino DJ. 1990. Citizen participation and environmental risk: a survey of institutional mechanisms. Sci, Technol Human Values. 15:226–243. doi:10.1177/016224399001500204.
  • Fischer TB, Gazzola P. 2006. SEA effectiveness criteria equally valid in all countries? The case of Italy. Environ Impact Assess Rev. 26:396–409. doi:10.1016/j.eiar.2005.11.006.
  • Flyvbjerg B. 2007. Megaproject policy and planning: problems, causes, cures. Aalborg: Aalborg University.
  • Flyvbjerg B. 2008. Public planning of mega-projects: overestimation of demand and underestimation of costs. In: Priemus H, Flyvberg B, van Wee B, editors. Decision-making on mega-projects. Northampton MA: Edward Elgar; p. 120–144.
  • Flyvbjerg B, Bruzelius N, Rothengatter W. 2003. Megaprojects and risk: an anatomy of ambition. New York: Cambridge University Press.
  • Forbes RS, Hazell S, Kneen J, Paterson J, Sinclair J. 2012. Environmental assessment law for a healthy, secure and sustainable Canada: a checklist for strong environmental laws. Vancouver and Ottawa: West Coast Environmental Law; [cited 2012 June 28]. Available from: http://wcel.org/sites/default/files/publications/A%20Checklist%20for%20Strong%20Environmental%20Laws%20February%202012.pdf.
  • Frame TM, Gunton TI, Day JC. 2004. The role of collaboration in environmental management: an evaluation of land and resource planning in British Columbia. J Environ Plann Manage. 47:59–82. doi:10.1080/0964056042000189808.
  • Gibson RB. 1993. Environmental assessment design: lessons from the Canadian experience. Environ Prof. 15:12–24.
  • Gibson RB. 2006. Sustainability assessment: basic components of a practical approach. Impact Assess Proj Apprais. 24:170–182. doi:10.3152/147154606781765147.
  • Gibson RB, Hanna KS. 2009. Progress and uncertainty: the evolution of federal environmental assessment in Canada. In: Environmental impact assessment: practice and participation. 2nd ed. Don Mills, ON: Oxford University Press; p. 18–36.
  • Gibson RB, Walker A. 2001. Assessing trade: an evaluation of the Commission for Environmental Cooperation's analytic framework for assessing the environmental effects of the North American Free Trade Agreement. Environ Impact Assess Rev. 21:449–468. doi:10.1016/S0195-9255(01)00085-3.
  • Gibson RE, Hassan S, Holtz S, Tansey J, Whitelaw G. 2005. Sustainability assessment: criteria, processes and applications. Sterling, VA: Earthscan Publications.
  • Gray B. 1989. Collaboration finding common ground for multiparty problems. San Francisco: Jossey-Bass Publications.
  • Green TL. 1997. Accounting for natural capital in BC: forestry and conflict in the Slocan Valley. Victoria, BC: University of Victoria.
  • Gregory R, Ohlson D, Arvai JL. 2006. Deconstructing adaptive management: criteria for applications to environmental management. Ecol Appl. 16:2411–2425. doi:10.1890/1051-0761(2006)016[2411:DAMCFA]2.0.CO;2.
  • Greig LA. 2008. Refocusing cumulative effects assessment. Proceedings of the IAIA 2008: Assessing and managing cumulative environmental effects.
  • Gunton T, Vertinsky I. 1990. Methods of analysis for forest land use allocation in British Columbia: options and recommendations. Prepared for the British Columbia Round Table on the Environment and the Economy. Victoria, BC.
  • Gunton TI. 1992. Evaluating land use tradeoffs: a review of selected techniques. Environments. 21:53–63.
  • Gunton TI. 2003a. Megaprojects and regional development: pathologies in project planning. Reg Stud. 37:505–519. doi:10.1080/0034340032000089068.
  • Gunton TI. 2003b. Natural resources and regional development: An assessment of dependency and comparative advantage paradigms. Econ Geogr. 79:67–94. doi:10.1111/j.1944-8287.2003.tb00202.x.
  • Gunton TI, Day JC, Calbick KS, Johnsen S, Joseph C, McNab J, Peter T-D, Silcox K, Van Hinte T. 2004. A review of offshore oil and gas development in British Columbia. Burnaby, BC.
  • Gunton TI, Day JC, Van Hinte T. 2005. Managing impacts of major projects: an analysis of the Enbridge Gateway Pipeline Project. Prepared for Coastal First Nations. Burnaby, BC.
  • Gunton TI, Joseph C. 2007. Toward a national sustainable development strategy for Canada: putting Canada on the path to sustainability within a generation. Vancouver: David Suzuki Foundation. Available from: http://www.davidsuzuki.org/files/SWAG/NSDS-Rpt-full-Eng.pdf.
  • Gunton TI, Rutherford MB, Dickinson M. 2010. Stakeholder analysis in marine planning. Environments. 37:95–110.
  • Hall P. 1980. Great planning disasters. Berkeley and Los Angeles, CA: University of California Press.
  • Hanley N, Spash CL. 1993. Cost-benefit analysis and the environment. Northampton, MA: Edward Elgar.
  • Hanna K, Noble BF. 2015. Using a Delphi study to identify effectiveness criteria for environmental assessment. Impact Assess Proj Apprais. 33:116–125. doi:10.1080/14615517.2014.992672.
  • Haveman RH. 1976. Policy analysis and the Congress: an economist's view. Policy Anal. 2:235–250.
  • Herring RJ. 2009. The Canadian federal EIA system. In: Environmental impact assessment: practice and participation. 2nd ed. Don Mills, ON: Oxford University Press; p. 281–297.
  • Hessing M, Howlett M, Summerville T. 2005. Canadian natural resource and environmental policy: political economy and public policy. Vancouver: UBC Press.
  • Hicks TD. 2011. Exploring the use of arguments in impact assessment: a case for impact significance arguments. Victoria, BC: Royal Roads University.
  • Hierlmeier JL. 2008. “The public interest”: can it provide guidance for the ERCB and NRCB? J Environ Law Pract. 18:279–311.
  • Hochschorner E, Finnveden G. 2003. Evaluation of two simplified life cycle assessment methods. Int J LCA. 8:119–128. doi:10.1007/BF02978456.
  • Hollick M. 1981. Role of quantitative decision-making methods in environmental impact assessment. J Environ Manage. 12:65–78.
  • Hollick M. 1984. Who should prepare environmental impact assessments? Environ Manage. 8:191–196. doi:10.1007/BF01866960.
  • ICPGSIA. 2003. Principles and guidelines for social impact assessment in the USA: the interorganizational committee on principles and guidelines for social impact assessment. Impact Assess Proj Apprais. 21:231–250. doi:10.3152/147154603781766293.
  • Innis JE, Booher DE. 2010. Planning with complexity: an introduction to collaborative rationality for public policy. New York: Taylor & Francis.
  • Irvin RA, Stansbury J. 2004. Citizen participation in decision making: is it worth the effort? Public Administration Rev. 64:55–65. doi:10.1111/j.1540-6210.2004.00346.x.
  • Jay S, Jones C, Slinn P, Wood C. 2007. Environmental impact assessment: retrospect and prospect. Environ Impact Assess Rev. 27:287–300. doi:10.1016/j.eiar.2006.12.001.
  • Joseph CTRB. 2013. Megaproject review in the megaprogram context: examining Alberta bitumen development. Burnaby, BC: Simon Fraser University.
  • Keeney RL. 1992. Value focused thinking: a path to creative decision-making. Cambridge, MA: Harvard University Press.
  • Kennett SA. 1999. Towards a new paradigm for cumulative effects management. Calgary: Canadian Institute for Resources Law.
  • Kennett SA. 2006. Integrated landscape management in Canada: getting from here to there. Calgary: Canadian Institute of Resources Law.
  • Kennett SA. 2007. Closing the performance gap: the challenge for cumulative effects management in Alberta's Athabasca oil sands region. Calgary: Canadian Institute of Resources Law.
  • Kennett SA, Alexander S, Duke D, Passelac-Ross MM, Quinn M, Stelfox B, Tyler M-E, Vlavianos N. 2006. Managing Alberta's energy futures at the landscape scale. Calgary: Institute for Sustainable Energy, Environment and Economy.
  • Kinnaman TC. 2011. The economic impact of shale gas extraction: a review of existing studies. Ecol Econ. 70:1243–1249. doi:10.1016/j.ecolecon.2011.02.005.
  • Knight N. 1990. Mega-project planning and economic welfare: a case study of British Columbia's North East Coal Project. Vancouver, BC: University of British Columbia.
  • Kok R, Benders RMJ, Moll HC. 2006. Measuring the environmental load of household consumption using some methods based on input-output energy analysis: a comparison of methods and a discussion of results. Energy Policy. 34:2744–2761. doi:10.1016/j.enpol.2005.04.006.
  • Laird FN. 1993. Participatory analysis, democracy, and technological decision making. Sci, Technol Human Values. 18:341–361. doi:10.1177/016224399301800305.
  • Land-Murphy B. 2004. Understanding Aboriginal participation in northern environmental assessments: the case of the Joint Review Panel for the Mackenzie Gas Project. Burnaby, BC: Simon Fraser University.
  • Lawrence DP. 2003. Environmental impact assessment: practical solutions to recurrent problems. Hoboken, NJ: John Wiley & Sons.
  • Leu WS, Williams WP, Bark AW. 1996a. Development of an environmental impact assessment evaluation model and its application: Taiwan case study. Environ Impact Assess Rev. 16:115–133. doi:10.1016/0195-9255(95)00107-7.
  • Leu WS, Williams WP, Bark AW. 1996b. Quality control mechanisms and environmental impact assessment effectiveness with special reference to the UK. Proj Apprais. 11:2–12. doi:10.1080/02688867.1996.9727013.
  • Linkov I, Satterstrom FK, Kiker G, Batchelor C, Bridges T, Ferguson E. 2006. From comparative risk assessment to multi-criteria decision analysis and adaptive management: recent developments and applications. Environ Int. 32:1072–1093. doi:10.1016/j.envint.2006.06.013.
  • Lovallo D, Kahneman D. 2003. Delusions of success. Harvard Business Rev. 81(7):56–63.
  • Marshall R, Arts J, Morrison-Saunders A. 2005. International principles for best practice EIA follow-up. Impact Assess Proj Apprais. 23:175–181. doi:10.3152/147154605781765490.
  • McAllister DM. 1982. Evaluation in environmental planning. Cambridge, MA: MIT Press.
  • McDaniels TL, Gregory R, Fields D. 1999. Democratizing risk management: successful public involvement in local water management decisions. Risk Anal. 19:497–510.
  • Morgan MG, Henrion M. 1990. Uncertainty: a guide to dealing with uncertainty in quantitative risk and policy analysis. New York: Cambridge University Press.
  • Morgan RK. 2012. Environmental impact assessment: the state of the art. Impact Assess Proj Apprais. 30:5–14. doi:10.1080/14615517.2012.661557.
  • Morris PWG, Hough GH. 1987. The anatomy of major projects: a study of the reality of project management. Toronto, ON: John Wiley & Sons.
  • Morrison-Saunders A, Bailey J. 2003. Practitioner perspectives on the role of science in environmental impact assessment. Environ Manage. 31:683–695. doi:10.1007/s00267-003-2709-z.
  • Morrison-Saunders A, Baker J, Arts J. 2003. Lessons from practice: towards successful follow-up. Impact Assess Proj Apprais. 21:43–56. doi:10.3152/147154603781766527.
  • Muldoon P, Lucas AR, Gibson R, Pickfield P. 2009. An introduction to environmental law and policy in Canada. Toronto: Emond Montgomery Publications.
  • Nash C, Pearce D, Stanley J. 1975. Criteria for evaluating project evaluation techniques. J Am Inst Planners. 41:83–89. doi:10.1080/01944367508977522.
  • Nikiforuk A. 1997. “The nasty game”: the failure of environmental assessment in Canada. Toronto: Walter & Duncan Gordon Foundation.
  • Noble B. 2010. Introduction to environmental impact assessment: a guide to principles and practice. Don Mills, ON: Oxford University Press.
  • Noble BF. 2009. Promise and dismay: the state of strategic environmental assessment systems and practices in Canada. Environ Impact Assess Rev. 29:66–75. doi:10.1016/j.eiar.2008.05.004.
  • OAGC. 2009. Report of the commissioner of the environment and sustainable development to the House of Commons. Chapter 1: applying the Canadian Environmental Assessment Act. Ottawa: Office of the Auditor General of Canada; p. 33.
  • Otway H, Winterfeldt D. 1992. Expert judgment in risk analysis and management: process, context, and pitfalls. Risk Anal. 12:83–93. doi:10.1111/j.1539-6924.1992.tb01310.x.
  • Park PJ, Tahara K, Jeong IT, Lee KM. 2006. Comparison of four methods for integrating environmental and economic aspects in the end-of-life stage of a washing machine. Resour Conserv Recycl. 48:71–85. doi:10.1016/j.resconrec.2006.01.001.
  • Passelac-Ross MM, Potes V. 2007. Crown consultation with Aboriginal peoples in oil sands development: is it adequate, is it legal? Calgary: Canadian Institute of Resources Law.
  • Priemus H. 2008. How to improve the early stages of decision-making on mega-projects. In: Priemus H, Flyvbjerg B, van Wee B, editors. Decision-making on mega-projects. Northampton, MA: Edward Elgar; p. 105–119.
  • Priemus H, Flyvbjerg B, Van Wee B, editors. 2008. Decision-making on mega-projects. Northampton, MA: Edward Elgar.
  • Rajvanshi A. 2008. Mitigation and compensation in environmental assessment. In: Fischer TB, Gazzola P, Jha-Thakur U, BelČáková I, Aschemann R, editors. Environmental assessment lecturer's handbook. Bratislava: Slovak University of Technology; p. 167–198.
  • Rickson RE, Western JS, Burdge RJ. 1990. Social impact assessment: knowledge and development. Environ Impact Assess Rev. 10:1–10. doi:10.1016/0195-9255(90)90002-H.
  • Rosenhead J. 2005. Problem structuring methods as an aid to multiple-stakeholder evaluation. In: Miller D, Patassini D, editors. Beyond benefit-cost analysis: accounting for non-market values in planning evaluation. Burlington, VT: Ashgate; p. 163–171.
  • Ross MM. 2002. Legal and institutional responses to conflicts involving the oil and gas and forestry sectors. Calgary: Canadian Institute of Resources Law.
  • Rowe G, Frewer LJ. 2005. A typology of public engagement mechanisms. Sci, Technol Human Values. 30:251–290. doi:10.1177/0162243904271724.
  • Runhaar H, Driessen PPJ. 2007. What makes strategic environmental assessment successful environmental assessment? The role of context in the contribution of SEA to decision-making. Impact Assess Proj Apprais. 25:2–14. doi:10.3152/146155107X190613.
  • Sadar MH. 1996. Environmental impact assessment. 2nd ed. Ottawa, ON: Carleton University Press.
  • Sadler B. 1990. An evaluation of the Beaufort Sea Environmental Assessment Panel Review. Ottawa: Federal Environmental Assessment Review Office.
  • Sadler B. 1996. International Study of the Effectiveness of Environmental Assessment: final report – Environmental assessment in a changing world: evaluating practice to improve performance. Canadian Environmental Assessment Agency, International Association for Impact Assessment, Minister of Supply and Services Canada.
  • Samset K. 2003. Project evaluation: making investments succeed. Trondheim, Norway: Tapir Academic Press.
  • Senecal P, Goldsmith B, Conover S, Sadler B, Brown K. 1999. Principles of environmental impact assessment best practice. Fargo, ND: International Association for Impact Assessment and Institute of Environmental Assessment.
  • Shaffer M. 2010. Multiple account benefit-cost analysis: a practical guide for the systematic evaluation of project and policy alternatives. Toronto: University of Toronto Press.
  • Shiftan Y, Shefer D. 2009. Evaluating the impact of transport projects: lessons for other disciplines. Eval Program Plann. 32:311–314. doi:10.1016/j.evalprogplan.2009.08.003.
  • Siemiatycki M. 2010. Managing optimism biases in the delivery of large-infrastructure projects: a corporate performance benchmarking approach. Eur J Transp Infrastruct Res. 10:30–41.
  • Sinnette J. 2004. Accounting for megaproject dollars. Public Roads. 68:40–47.
  • Slotterback CS. 2008. Stakeholder involvement in NEPA scoping processes: evaluating practices and effects in transportation agencies. J Environ Plann Manage. 51:663–678. doi:10.1080/09640560802211060.
  • Sorel T. 2004. Great expectations. Public Roads. 68:10–15.
  • Sosa I, Keenan K. 2001. Impact benefit agreements between Aboriginal communities and mining companies: their use in Canada. Toronto: Canadian Environmental Law Association, Environmental Mining Council of British Columbia, CooperAcción; [cited 2008 Mar 9]. Available from: http://cela.ca/uploads/f8e04c51a8e04041f6f7faa046b03a7c/IBAeng.pdf.
  • Soumelis CG. 1977. Project evaluation methodologies and techniques. Paris: UNESCO.
  • Spaling H, Montes J, Sinclair J. 2011. Best practices for promoting participation and learning for sustainability: lessons from community-based environmental assessment in Kenya and Tanzania. J Environ Assess Policy Manage. 13:343–366. doi:10.1142/S1464333211003924.
  • Storey K, Hamilton LC. 2003. Planning for the impacts of megaprojects: two North American examples. In: Rasmussen RO, Koroleva NE, editors. Social and environmental impacts in the north: methods in evaluation of socio-economic and environmental consequences of mining and energy production in the Arctic and Sub-Arctic. Dordrecht: Springer; p. 281–302.
  • Stough RR, Haynes KE. 1997. Megaproject impact assessment. In: Chatterji M, editor. Regional science: perspectives for the future. New York: St Martin's Press; p. 384–398.
  • Stratos. 2008. Improvements to the performance of the federal regulatory system: issues and research scoping workshop. Ottawa: Stratos Inc.
  • Susskind L, van der Wansem M, Ciccarelli A. 2003. Mediating land use disputes in the United States: pros and cons. Environments. 31:39–58.
  • Tennøy A, Kværner J, Gjerstad KI. 2006. Uncertainty in environmental impact assessment predictions: the need for better communication and more transparency. Impact Assess Proj Apprais. 24:45–56.
  • Tiruta-Barna L, Benetto E, Perrodin Y. 2007. Environmental impact and risk assessment of mineral wastes reuse strategies: review and critical analysis of approaches and applications. Resour Conserv Recycl. 50:351–379. doi:10.1016/j.resconrec.2007.01.009.
  • Tollefson C. 2003. Public participation and judicial review. In: Hughes EL, Lucas AR, Tilleman WA, editors. Environmental law and policy. 3rd ed. Toronto: Emond Montgomery Publications; p. 255–300.
  • UN. 2007. United Nations Declaration on the Rights of Indigenous Peoples. New York: United Nations.
  • Van Hinte T, Gunton TI, Day JC. 2007. Evaluation of the assessment process for major projects: a case study of oil and gas pipelines in Canada. Impact Assess Proj Apprais. 25:123–137. doi:10.3152/146155107X204491.
  • Van Wee B, Tavasszy LA. 2008. Ex-ante evaluation of mega-projects: methodological issues and cost-benefit analysis. In: Priemus H, Flyvbjerg B, van Wee B, editors. Decision-making on mega-projects. Northampton, MA: Edward Elgar; p. 40–65.
  • Vanclay F. 2003. International principles for social impact assessment. Impact Assess Proj Apprais. 21:5–12. doi:10.3152/147154603781766491.
  • Vanclay F, Esteves AM, Aucamp I, Franks D. 2015. Social impact assessment: guidance for assessing and managing the social impacts of projects. Fargo, ND: International Association for Impact Assessment. Available from: http://www.socialimpactassessment.com/documents/IAIA%202015%20Social%20Impact%20Assessment%20guidance%20document.pdf.
  • Vesely A. 2011. Theory and methodology of best practice research: a critical review of the current state. Cent Eur J Public Pol. 5:98–117.
  • Vickerman R. 2007. Cost–benefit analysis and large-scale infrastructure projects: state of the art and challenges. Environ Plann B. 34:598–610. doi:10.1068/b32112.
  • Vlavianos N. 2007. The legislative and regulatory framework for oil sands development in Alberta: a detailed review and analysis. Calgary: Canadian Institute of Resources Law.
  • Walters C. 1986. Adaptive management of renewable resources. New York: Macmillan Publishing Company.
  • Warner ML, Preston EH. 1974. A review of environmental impact assessment methodologies. Washington, DC: report prepared for the US Environmental Protection Agency.
  • Warrack A. 1993. Megaproject decision making: lessons and strategies. Edmonton, AB: Western Centre for Economic Research, Faculty of Business, University of Alberta.
  • Wathern P, editor. 1988. Environmental impact assessment: theory and practice. London: Routledge.
  • WCD. 2000. Dams and development: a new framework for decision-making. The report of the World Commission on Dams. London: Earthscan.
  • Wood C. 1995. Environmental impact assessment: a comparative review. New York: Wiley.
  • Wood C. 2003. Environmental impact assessment: a comparative review. 2nd ed. Upper Saddle River, NJ: Prentice Hall.
  • Wozniak K. 2007. Evaluating the regulatory review and approval process for major projects: a case study of the Mackenzie Gas Project. Burnaby, BC: Simon Fraser University.
  • Zeremariam TK, Quinn N. 2007. An evaluation of environmental impact assessment in Eritrea. Impact Assess Proj Apprais. 25:53–63. doi:10.3152/146155107X190604.
