
Scholar and practitioner views on science in environmental assessment

Pages 516-528 | Received 30 May 2018, Accepted 27 Aug 2018, Published online: 21 Sep 2018

ABSTRACT

This study investigated the views of environmental assessment (EA) scholars and practitioners on science in EA. Fifty-six online survey responses were received, from 35 scholars and 21 practitioners. To supplement the survey data, we interviewed 13 Canadian EA practitioners. The study indicates that EA scholars were more dissatisfied than practitioners with the quality of science in EA, and that their perceptions were related to their understanding of science and their underlying expectations of scientific practices in EA. This study confirms a gap between science inside and outside EA. Barriers to filling this gap include EA stakeholders’ different understandings of and expectations for the quality of science, the role of scholars in EA, and the purpose and objectives of EA. These disagreements imply insufficient and/or ineffective communication among EA stakeholders, which should be addressed if a more collaborative arrangement is to be developed to improve the quality of science in EA.

Introduction

Environmental assessment (EA) is a widely acknowledged formal political decision-making and environmental planning tool in many countries (Sadler Citation1996; Noble Citation2015a). It can be traced back to the 1970s, when it was developed in the United States of America (USA) as a formal political response by the government to growing public concern about environmental impacts caused by human developments (Ortolano and Shepherd Citation1995; Fischer and Noble Citation2015). Because of its political roots, EA was criticized as a political decision-making tool used by proponents and politicians to justify project decisions that had already been made by the time the EA process was initiated (Noble Citation2015a). In addition, this history has led to the failure of EA decisions to meet the requirements of rationalist models (Lee et al. Citation1995; Ortolano and Shepherd Citation1995). Political decisions involve complicated trade-offs among various factors (e.g. economic benefits at the expense of environmental costs, and vested interests among different relevant parties and stakeholders). A fully rational EA decision is therefore unlikely to be made in practice in the political arena (Cashmore et al. Citation2004).

Disagreement about the purposes of EA among major EA stakeholders arose from the diversity of interests and objectives expressed by different participating groups (Enríquez-de-Salamanca Citation2018). Beanlands and Duinker (Citation1983) used four major EA stakeholders in the Canadian context (i.e. government administrators, proponents, consultants, and research scientists) to illustrate the vested interests and objectives of different participating groups in the EA process. Fuller (Citation1999) echoed Beanlands and Duinker (Citation1983) in describing the interests of various EA stakeholders, and acknowledged that the expectations of different EA stakeholders are not complementary and are sometimes even in conflict. An underlying conflict is thus embedded in the purposes that different stakeholders assign to EA. These different objectives led to different perceived purposes of EA, and in turn to different expectations of the quality of EA and of the scientific practices implemented in the EA process (Enríquez-de-Salamanca Citation2018). Ultimately, these different expectations were reflected in the variable quality of environmental impact statement (EIS) documents, a major product of the EA process (Fuller Citation1999).

EA was developed over 40 years ago, and the earliest EA-related best-practice guidelines and EIS handbooks were published in the 1970s (e.g. Cheremisinoff and Morresi Citation1977). Strong scientific principles for EA practice have also been established since the 1970s (e.g. Holling Citation1978) and, with growing attention, were further strengthened in the 1980s (e.g. Caldwell et al. Citation1982; Beanlands and Duinker Citation1983). However, Mackinnon et al. (Citation2018), in a thorough literature review of scientific developments associated with EA since the 1970s, identified a large gap between science inside EA and science outside EA, meaning that scientific quality in EA still requires improvement.

The concepts of science inside EA and outside EA were introduced by Greig and Duinker (Citation2011) when they were trying to demonstrate a mutually supportive cycle between research scientists and proponents for improving the implementation of science in the EA process. Greig and Duinker (Citation2011) categorized science into two groups: science outside EA refers to reliable ecological effects knowledge that is created by research scientists outside EA (e.g. Lester et al. Citation2010). Science inside EA refers to the ecological knowledge that is used and applied in EA impact predictive models by EA practitioners operating inside EA (e.g. Dipper Citation1998; Jones and Fischer Citation2016).

In the early 1980s, Beanlands and Duinker (Citation1983) identified a necessary step: transforming the adversarial relationship between scientists and EA practitioners into a more collaborative one to improve the overall quality of science in EA. Later studies identified the importance of a collaborative relationship between scientists and other stakeholders (e.g. policy-makers, citizens, and project proponents) for generating well-informed resource management decisions and achieving sustainable natural resource management (Rogers Citation2006; Roux et al. Citation2006; Gibbons et al. Citation2008; Ryder et al. Citation2010). Barriers to achieving such a relationship have also been discussed in the literature, including cultural differences, differences in educational background, lack of incentives for collaboration, and lack of opportunities for communication (Briggs Citation2006; Roux et al. Citation2006; Ryder et al. Citation2010). These barriers lead to fundamentally conflicting values, beliefs, and understandings of science, and consequently make it difficult for these stakeholders to understand each other (Briggs Citation2006; Ryder et al. Citation2010).

Greig and Duinker (Citation2011) emphasized that agreement among EA stakeholders on the need for improved quality of science in EA is a necessary condition. However, EA stakeholders seem to hold different perceptions of the quality of EA performance. On the one hand, EA reviewers have consistently been dissatisfied with the quality of EA practices. The EIS is a key product of EA systems and has commonly been used to reflect the quality of EA practices performed by EA practitioners (e.g. proponents and consultants hired by proponents). Reviews of EISs indicate discrepancies between what should be done, as described in the literature, and what had been done, as stated in EISs, in Europe (Glasson et al. Citation1997; Thompson et al. Citation1997; Barker and Wood Citation1999; Gray and Edwards-Jones Citation2003; Canelas et al. Citation2005), Canada (Duinker Citation2013), the USA (Tzoumis Citation2007), and Bangladesh (Kabir and Momtaz Citation2012). In contrast, in a Dutch survey, EA practitioners were generally satisfied with the quality of EA reports (Arts et al. Citation2012). EA practitioners, mainly consultants, were likewise generally satisfied with the quality of scientific work done in EA in an Australian survey (Morrison-Saunders and Bailey Citation2003) and in a Finnish survey (Jalava et al. Citation2010). Even though these EA practitioners indicated dissatisfaction with the weight given to science at different stages of an EA system, when asked to self-evaluate their own performance their satisfaction levels were generally very high (Morrison-Saunders and Bailey Citation2003).

It is essential to understand the perceptions of various EA stakeholders on the quality of science embedded in EA practices. Effective EA decision-making needs scientific support (Morrison-Saunders and Sadler Citation2010); effective scientific support, in turn, needs a harmonious, collaborative relationship between EA scholars and practitioners, and a consensus on perceived satisfaction with scientific work in EA practices (Greig and Duinker Citation2011). A substantial body of theoretical and empirical literature addresses the role of science and the quality of scientific practices in EA. However, only a few studies have investigated the perceptions of EA practitioners of scientific practices in EA (e.g. Caldwell et al. Citation1982; Sadler Citation1996; Morrison-Saunders and Bailey Citation2003; Jalava et al. Citation2010), and we found no literature that investigates the perceptions of both EA practitioners and scholars and compares the differences (if any) between them. This study provides data on EA scholars’ and practitioners’ perspectives on the degree to which science in EA needs improvement, and on the factors with potential for improving the state of scientific work within EA. Our research questions focus on:

  1. What are the levels of satisfaction of EA scholars (i.e. research scientists) and practitioners (i.e. government scientists, consultants, EA administrators, and non-governmental organizations (NGOs)) with the quality of science embedded in an EA process?

  2. What is the perceived status of science in the EA process by scholars and practitioners?

  3. What factors have influenced the quality of scientific practices in EA?

Methods

Online surveys

Surveys are a frequently used instrument in similar EA inquiries (e.g. Sadler Citation1996; Morrison-Saunders and Sadler Citation2010). Opinio (ObjectPlanet Citation2018), an online survey platform, was selected for this study for its low cost and data security. The survey included eight demographic questions on respondents’ EA-related working experience, teaching experience, roles played in EA, age, education, gender, the countries they have most worked in, and contributions to the EA-related literature, and eight further questions on respondents’ perceptions of (1) the quality of science at the stages of the EA process; (2) factors influencing the quality of science in EA; (3) the power of various EA stakeholders; (4) the contributions of scientific support from the scholarly community to the EA process; and (5) the status of science in EA practices. The original survey questions can be found in the online supplementary material of the article. In the survey introduction, we instructed potential respondents with the following text:

‘We acknowledge a wide range of interpretations of what science is in general, and what scientific approaches people think are appropriate in EIA. For our purposes, it is sufficient for you to answer the survey with your own interpretation of science in the context of EIA. We also acknowledge the essential role of traditional knowledge, especially among Aboriginal peoples, in providing vital insights in the process of EIA, but this survey focusses on formal scientific approaches and methods’.

We avoided offering our own conception of what science is in the context of EA, and of which scientific approaches we think are most suitable in EA, out of concern that doing so might lead respondents to challenge us based on their own views and divert their response energies away from the questions we posed. A five-point Likert scale was used to investigate respondents’ satisfaction with the quality of science at the stages of the EA process (i.e. ‘very dissatisfied’, ‘dissatisfied’, ‘neither satisfied nor dissatisfied’, ‘satisfied’, and ‘very satisfied’). A three-point scale was used to investigate respondents’ perceptions of the importance of factors influencing the quality of science in EA, and of the power of various EA stakeholders in the EA process (i.e. low, medium, and high importance/power/influence). ‘I do not know’ was also provided as an additional option in the survey questions. These questions contributed to the formulation of generalizations about the trends and themes in the perceived quality of science applied in the EA process among the key stakeholders in the EA community.

Invitations to engage with the online surveys were distributed in two ways: (1) through the International Association for Impact Assessment (IAIA) network, via its monthly e-news and its regular distribution list (i.e. IAIA Connect), and (2) by email following a literature search for EA-related journal paper authors (i.e. in Environmental Impact Assessment Review, and Impact Assessment and Project Appraisal). IAIA is a well-known leading global network in the field of impact assessment, and the IAIA network has been used in previous research to investigate EA professional perspectives on EA inquiries (e.g. Sadler Citation1996; Morrison-Saunders and Sadler Citation2010). Because the majority of IAIA network members are EA practitioners, around 200 authors of papers published from 2014 to 2016 in Environmental Impact Assessment Review were selected as potential respondents in order to balance the ratio of EA scholars and practitioners in the survey results. All surveys were anonymous, and the instrument was kept open from May 2016 to December 2016. By the end of 2016, 56 respondents had completed the survey, including 35 EA scholars and 21 EA practitioners. Raw survey responses were downloaded from Opinio in Microsoft Excel (MS Excel) format (Microsoft Citation2018) and then categorized, based on respondents’ self-identified roles, into two groups – ‘EA scholars’ and ‘EA practitioners’.

Minitab (Minitab Inc Citation2018) was used to perform the Mann–Whitney U test, a non-parametric statistical method suited to the ordinal data generated by Likert-scale questions (De Winter and Dodou Citation2010). Statistical significance values were calculated to identify differences between the satisfaction levels of EA scholars and practitioners with the quality of science in EA. The p-value significance thresholds selected for this study were 0.1 and 0.05. For the survey data on respondents’ attitudes to the quality of science in EA, ‘satisfied’ and ‘very satisfied’ were merged, as were ‘dissatisfied’ and ‘very dissatisfied’. Respondents who answered ‘I do not know’ or did not respond to a specific question were excluded from the calculation of the percentage satisfaction levels. NVivo (QSR International Pty Ltd Citation2018) was used to handle qualitative data from the open-ended survey questions and the additional comments provided by respondents to explain their answers to Likert-scale questions. These qualitative data were coded, categorized, analyzed, and distilled using an a posteriori coding scheme.
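The quantitative steps described here (comparing the two groups’ ordinal Likert responses with a Mann–Whitney U test, merging the top and bottom satisfaction categories, and excluding ‘I do not know’ answers from the percentages) can be sketched in Python with SciPy. The response values below are invented for illustration and are not the study’s data.

```python
from scipy.stats import mannwhitneyu

# Ordinal coding of the five-point scale:
# 1 = very dissatisfied ... 5 = very satisfied.
# Hypothetical responses for one EA component (illustrative only).
scholars = [1, 2, 2, 3, 2, 1, 4, 2, 3, 2]
practitioners = [3, 4, 4, 2, 5, 3, 4, 3]

# Two-sided Mann-Whitney U test on the ordinal data.
stat, p = mannwhitneyu(scholars, practitioners, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # compare p against alpha = 0.1 or 0.05

def satisfaction_pct(responses):
    """Merge categories as in the study: 1-2 -> dissatisfied, 4-5 -> satisfied.
    'I do not know' / non-responses (None) are excluded from the percentages."""
    valid = [r for r in responses if r is not None]
    dissat = sum(r <= 2 for r in valid) / len(valid) * 100
    sat = sum(r >= 4 for r in valid) / len(valid) * 100
    return round(dissat), round(sat)

print(satisfaction_pct(scholars))  # -> (70, 10): % dissatisfied, % satisfied
```

The Mann–Whitney U test compares the two groups without assuming equal intervals between Likert categories, which is why it suits this kind of ordinal data.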

Interviews

Given the small number of survey responses received, interviews served as a complementary instrument to enrich the survey results. The interviews included six demographic questions, highly similar to those in the online survey, and eight further questions on interviewees’ perspectives on scientific practices in EA. Thirty-six Canadian EA professionals were purposively selected for interviews based on the authors’ personal knowledge. Thirteen of the 36 agreed to be interviewed. One interviewee acted as both a scholar and a practitioner; the others acted purely as practitioners. None of the potential interviewees acting purely as EA scholars agreed to be interviewed. Therefore, the interview data were not categorized into two groups as had been done for the survey results.

Because the purposively selected EA professionals were located across Canada, the interviews were conducted individually, either over the phone or in person at a location convenient to the interviewee. Interviews took between 40 and 120 min, depending on the interviewees’ responses, and were audio-recorded and then transcribed verbatim by the authors. MS Excel (Microsoft Citation2018) was used to create figures representing interviewees’ demographic information, and NVivo (QSR International Pty Ltd Citation2018) was used to handle the qualitative interview data. An inductive coding method was applied to the interview data first, and the resulting coding nodes were then compared with those generated from the survey results to identify whether the interview data and survey results were complementary and consistent, or conflicting.

Results

Survey demographic data

Thirty-five survey respondents were identified as EA scholars. Most of them had a medium to high level of teaching experience (83%) and a medium to high level of contribution to the EA-related literature (80%). Twenty-one survey respondents were identified as EA practitioners; they generally had lower levels of teaching and literature-writing experience than the scholars. The roles of the EA practitioners included consultants (71%), EA regulators (19%), NGOs (5%), and other (5%). Both pools of survey respondents had a generally high level of EA experience: 67% of scholars and 84% of practitioners had more than 5 years of EA experience. Most scholars had doctorate degrees or had worked as post-doctoral fellows (73%), whereas most practitioners had Master’s degrees (57%) and some had doctorate degrees (29%). The countries and/or continents in which the survey respondents mostly work on EA projects varied (). Interestingly, 35% of EA scholars answered the survey questions from a biophysical science point of view and 65% from a social science point of view; by contrast, 81% of EA practitioners answered from a biophysical science point of view and only 19% from a social science point of view.

Figure 1. Countries/continents that survey respondents work in on EA projects.


Survey results

Both scholars and practitioners were highly dissatisfied with cumulative effects assessment (82% and 80%, respectively). Perceived satisfaction levels for other EA components were highly variable for both groups (). For example, regarding species at risk, EA scholars generally expressed higher satisfaction than EA practitioners, although this difference was not statistically significant (). Scholars were significantly more dissatisfied than practitioners with the quality of science used in the identification of valued ecosystem components (VECs) (p = 0.0299, alpha = 0.05), and slightly more dissatisfied with the approaches used for impact prediction (p = 0.0974, alpha = 0.1). No statistically significant differences were observed for the other individual EA components (), indicating no detectable difference in perceived satisfaction levels with those components.

Table 1. Satisfaction levels (%) expressed by EA scholars and practitioners (i.e. satisfied, neither satisfied nor dissatisfied, and dissatisfied) with EA components. Bold numbers in the table refer to the relatively higher percentages of survey respondents. The topics were grouped based on a general understanding of their frequencies included in an EA system.

Table 2. Statistical significance in differences between satisfaction levels expressed by EA scholars and practitioners with EA components. Selected confidence level is 90% and 95% (alpha = 0.1 and 0.05).

At an aggregated level, EA scholars expressed greater dissatisfaction than EA practitioners with the quality of science used in most EA components, which was confirmed using a Mann–Whitney U test (p = 0.0092, alpha = 0.05). The six components in group 1 are the most common ones in an EA process (see Beanlands and Duinker Citation1983), because they can be found in almost all EA systems; the six components in group 2 are less common. Specifically, EA scholars were significantly more dissatisfied than practitioners with the six most common EA components (p = 0.0008, alpha = 0.05) ().

Reasons for satisfaction with these EA components can be categorized into two groups. First, survey respondents were satisfied with what had been done, such as the simple cause–effect approaches used to identify VECs. Second, their satisfaction was relative to the current level of proponents’ capacity and the availability of analytical tools.

Reasons for the dissatisfaction with EA components can be summarized into 10 groups:

  1. non-existent EA components;

  2. inadequate implementation;

  3. very weak scientific foundation;

  4. an overly narrow, unclearly or incorrectly defined scope and focus;

  5. ignored consideration of uncertainties;

  6. big gaps between what should be done based on scientific literature and what has been done in real EA practices;

  7. insufficient baseline data collection;

  8. ambiguous decision-making processes;

  9. political barriers; and

  10. limited resource investments (e.g. time, human, and financial resources).

Fourteen factors were included in the survey to investigate the importance levels ascribed to them by the EA scholars and practitioners; these were categorized into three groups based on the survey responses (). The first group includes seven factors considered highly important by both scholars and practitioners. The factors in the second group were considered of medium importance by most EA scholars and practitioners. The third group consists of five factors considered more important by EA scholars than by practitioners. Using a Mann–Whitney U test on single factors, EA practitioners rated the importance of time availability slightly higher than scholars did, and EA scholars rated the importance of the first four factors in group 3 higher than practitioners did (). Responses to the first three factors in group 3 reflect different attitudes towards the importance of scientific community participation in the EA process: statistical analysis of the grouped responses on these three factors indicates that scholars considered their participation in EA more important than practitioners did (p = 0.0071, alpha = 0.05). Scholars explained that their participation could bring more scientific considerations into project decisions; practitioners, however, considered that EA is not scientific research and indicated that cooperation between scholars and practitioners can be helpful only when scholars allow different value systems to be freely discussed. Interestingly, a higher proportion of practitioners than scholars expressed dissatisfaction with the contribution of the scientific community to EA ().

Table 3. Importance levels (%) and statistical significance in the differences between the importance levels expressed by EA scholars and practitioners (i.e. low, medium, and high importance) with influential factors in influencing quality of science in EA. Factors were grouped based on the level of importance perceived by scholars and practitioners. Selected confidence level is 90% and 95% (alpha = 0.1 and 0.05).

Figure 2. Satisfaction levels (%) of scholars and practitioners with the contribution of the scholarly scientific community to the EA process. Statistical analysis indicated no significant difference between the two groups’ satisfaction levels (p = 0.278, alpha = 0.1).


Additional factors were reported by scholars and practitioners. For example, scholars emphasized that a clear definition of science, and clear guidelines from regulators on the methods for conducting scientific analysis, are also highly important in influencing the quality of science. Practitioners also listed the following factors:

  1. integration of science and traditional, local cultural, and indigenous knowledge;

  2. independence of EA practitioners;

  3. effectiveness of communication;

  4. community engagement;

  5. political interference;

  6. methodological guidelines;

  7. opportunities for practitioners to learn the theoretical foundation of EA;

  8. scientific understanding of cause–effect; and

  9. accordance with dominant value systems.

Both scholars and practitioners indicated that the contribution of science is compromised by other factors, such as traditional knowledge and public interest, in the project decision-making process. Weak synergy between scholars and government regulators/decision-makers also results in limited scientific insight being embedded in EA regulations and government guidance documents. As for the ways science should be strengthened to improve the effectiveness and efficiency of its contribution to project decision-making, the scholars proposed the following priorities:

  1. more consideration of uncertainty;

  2. better training of EA practitioners;

  3. better understanding of the gap between scientific problems in academia and real EA problems in practice;

  4. increased funding;

  5. improved participatory processes;

  6. higher scientific standards required of practitioners by permitting authorities;

  7. greater independence of scientists and science;

  8. better collaboration between scholars and practitioners;

  9. improved involvement of scholars in the EA process; and

  10. better adaptive learning from the EA process.

The practitioners suggested the following ways:

  1. a better application of precautionary principles in decision-making processes;

  2. more time and funding availability;

  3. more consideration of uncertainty;

  4. stronger partnerships between stakeholders;

  5. strengthened scientific standards by regulators;

  6. more-transparent decision-making processes;

  7. earlier participation of scholars in the EA process;

  8. scientists with more field experience and more collaborative and humble attitudes; and

  9. continuous refining of the best available science.

For most stakeholders, EA scholars and practitioners ascribed similar levels of power and influence in the EA process (), and there was no statistical difference between the two pools’ merged responses on the perceived power and influence of all stakeholders (p = 0.8957, alpha = 0.1). For individual stakeholders, those in group 1 were considered highly powerful by both pools, and the power and influence of stakeholders in group 2 were considered medium by both pools (). Scholars ascribed a higher level of power to proponents than practitioners did (p = 0.0113, alpha = 0.05); no statistically significant differences were found in the two pools’ attitudes about the other stakeholders. For group 3, practitioners considered scholars and aboriginal groups to have a more powerful role in EA than the scholars believed they had, and the difference between the two pools’ attitudes towards the power of scholars was confirmed by statistical analysis (p = 0.0875, alpha = 0.1).

Table 4. Power and influence levels (%) perceived by EA scholars and practitioners (i.e. low, medium, and high power and influence) in the generic EA process. Stakeholders were grouped based on the level of influence perceived by scholars and practitioners. The highest percentage for each stakeholder is italicized in the table.

Interview demographic data

Twelve interviewees were identified as EA practitioners, including consultants, proponents, EA regulators, government reviewers, and NGOs, and one interviewee was identified as both an EA practitioner and a scholar. Most interviewees had little EA-related teaching experience and few contributions to the literature, but they were experienced EA professionals with a very high level of EA-related work experience (). All had a Canadian EA background. Interestingly, no interviewee answered the questions from a social science perspective; most indicated that they answered from a mixed perspective ().

Figure 3. Demographic information about interviewees.


Interview results

The factors behind interviewee satisfaction with the quality of science in EA echoed the factors offered by survey respondents, and the reasons why interviewees were dissatisfied can also be categorized into the same 10 groups as those presented in survey responses.

Regarding the status of EA, interviewees indicated that current EA is about mitigating undesirable outcomes and has become a standardized exercise in moving a project through a regulatory process rather than a scientific one. They said this standardized process leads to a lack of creativity in finding solutions for EA issues and to inefficient use of proponents’ time and money. In addition, some important stakeholders are marginalized in this kind of standardized regulatory process because of their low political power. EA is seen as a consulting and negotiating process among various stakeholders rather than an evaluation process based on scientific evidence. Because stakeholders disagree about the purposes of EA, each uses it as a tool for their own ends. For example, proponents want their projects approved, so they use EA to understand other stakeholders’ concerns and thereby secure approval. Indigenous groups and landowners want to protect their properties, and so treat EA as a way of opposing proposed projects. Those concerned about climate change treat EA as a way of reminding the government of the importance of climate change and of how a proposed project would increase greenhouse gas emissions. Stakeholders bringing these different stories to public hearings makes EA complicated.

Regarding the status of science in EA, some interviewees indicated that EA generally has a weak scientific foundation, which results in low-quality discourse. The main factors contributing to a weak scientific foundation in the preparation of EISs include unqualified EA practitioners, time and cost constraints imposed on consultants by proponents, regulators’ low expectations of the quality of science, and low participation of scholars and government scientists in the EIS preparation phase. Interviewees flagged two other scientific issues. First, science is misused when scientific data are selectively deployed to support stakeholders’ arguments. On one hand, scientific uncertainty is used by intervenors to block projects, which is not an appropriate way of helping a project to be improved; academic scientists often reveal support for the positions of specific intervenors and, in doing so, engage in a narrow selection of evidence for those positions. On the other hand, proponents have brought forward incomplete information and have been selective in the analysis in EA reports in order to get their projects approved. Second, there is a lack of independent science. Governments do not have sufficient budgets to hire scientists to conduct EA; scientific work is generally done by professionals hired by proponents, who consequently are not free to speak about negative project impacts. In addition, the current EA structure neither demands nor encourages independent science.

Several adverse tensions were identified in the relationships among EA stakeholders. Consultants expressed satisfaction with the general quality of science because they believe they meet the level of science expected by government reviewers. However, government reviewers indicated that, under political pressure, they are not allowed to speak freely on scientific issues and negative environmental findings. Regulators expressed satisfaction with the amount and quality of scientific information they receive and post to the public record, yet regulators’ low acceptance threshold for science was criticized by other stakeholders as a reason why scientific quality in EA is unsatisfactory. Negative impacts flagged by government reviewers and NGOs cannot sway the decisions of politicians and the Ministers in charge, because political will gives economic benefits and concerns greater weight in EA decisions than environmental concerns.

Discussion

Survey results indicate that EA practitioners were generally satisfied with the quality of science in EA. This finding is consistent with the results of two other surveys of practitioner perspectives on the quality of science in EA – one at an international scale (Sadler 1996), and another in Western Australia (Morrison-Saunders and Bailey 2003).

EA practitioners expressed a higher level of satisfaction with the quality of science in EA than EA scholars in the current survey. Respondents generally based their satisfaction on what they had experienced as achievements in the scientific practices of EA. Interestingly, when EA practitioners rated the quality of scientific practices in EA, they were evaluating their own work: practitioners conduct EAs, so in this survey they acted as self-evaluators, whereas EA scholars acted as external evaluators. Scholars may therefore be more critical in evaluating the scientific practices in EA than practitioners, and it is not surprising that practitioners reported higher satisfaction. Greig and Duinker (2011) used the same logic to explain the survey results of Morrison-Saunders and Bailey (2003), in which EA practitioners in Western Australia reported high satisfaction with the quality of science.
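Whether such a satisfaction gap between two respondent groups is statistically meaningful is typically checked with a nonparametric test, since Likert ratings are ordinal (cf. De Winter and Dodou 2010). The sketch below is purely illustrative: the rating values are invented, not this study’s data, and the function is a minimal Mann-Whitney U computation.

```python
# Minimal Mann-Whitney U statistic for ordinal (e.g. 5-point Likert) data.
# Illustrative sketch only: the ratings below are invented, not study data.

def mann_whitney_u(a, b):
    """Return the U statistic for group a, using average ranks for ties."""
    combined = sorted((value, index) for index, value in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        # Extend j over the run of tied values starting at i.
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    rank_sum_a = sum(ranks[:len(a)])  # group a occupies the first len(a) slots
    return rank_sum_a - len(a) * (len(a) + 1) / 2

# Hypothetical 5-point satisfaction ratings (1 = very dissatisfied).
scholars = [2, 3, 2, 4, 2]
practitioners = [4, 3, 5, 4, 3]
print(mann_whitney_u(scholars, practitioners))  # → 4.0
```

A small U relative to the maximum (here 5 × 5 = 25) indicates that the first group’s ratings tend to be lower than the second’s; an actual comparison would of course use the real survey responses and report an associated p-value.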

Another explanation for the different satisfaction levels of EA scholars and practitioners is that they understand science differently, and this distinction produces different expectations of the quality of science implemented in EA. Non-scientists tend to consider science a process of data collection; when they discuss the quality of science, they generally think about how much data were collected and how good the data collection was (Bisbal 2002; Briggs 2006; Sullivan et al. 2006; Ryder et al. 2010; Green and Garmestani 2012). Scientists, however, consider science a more rigorous analytical process: it starts with an identified research problem, whose defined scope determines the scope of data collection and the selection of an appropriate research method; a thorough analysis follows data collection; and a conclusion is finally drawn from the logic embedded in the analysis (Bisbal 2002; Briggs 2006; Sullivan et al. 2006; Ryder et al. 2010; Green and Garmestani 2012). These different understandings of science embody different values and beliefs, and these values sometimes conflict in the environmental decision-making policy arena (Briggs 2006; Ryder et al. 2010).

Science is not easily practiced in EA, and the following factors contribute to the difficulty of its implementation: (1) low expectations or unclear direction from regulatory administrators regarding the quality of science; (2) limitations in human, temporal, and financial resources; and (3) adversarial political tensions among stakeholders. Low expectations or unclear direction from regulators can be understood from two perspectives. First, regulators may have a solid understanding of science but set a low quality threshold for scientific practices in EIS guidelines because of practical constraints such as staff shortages, loss of experienced staff, and insufficient funding (Morrison-Saunders and Bailey 2009). Second, regulators may not have a deep understanding of science and therefore no sense of what an appropriate quality threshold should be, with the result that EIS guidelines are written in vague and ambiguous language. Low expectations or unclear direction from regulators also underlie most of the reasons for dissatisfaction given by survey respondents, such as missing EA components, inadequate implementation, an overly narrow scope and focus, unclearly or incorrectly defined scope, large gaps between what the scientific literature prescribes and what is done in real EA practice, and insufficient baseline data collection. Because regulators do not make their requirements for the scientific dimensions of EA explicit, practitioners cannot form a clear understanding of regulators’ expectations for the quality of science. Combined with time and financial constraints on the proponents’ side, it is not surprising that the scientific quality issues listed above arise and generate dissatisfaction among EA stakeholders.

The literature demonstrates the power of regulators to influence the environmental performance of proponents, and proponents also have incentives to be proactive and/or interactive towards environmental approvals regulation. Specifically, Morrison-Saunders et al. (2001) identified the expectations of EA regulators as one of the main drivers of improvements in the quality of science in EA, and regulator pressure as the main incentive pushing proponents to improve the quality of EISs. In addition, an administrative control mechanism operated by regulators was identified as one of the main drivers of positive changes in Brazilian EISs between 1987 and 2010 (Landim and Sánchez 2012). The results of these two studies echo Annandale (2000), who showed the power of regulators in shaping how mining companies respond to environmental approvals regulations. In a later study, Annandale et al. (2004) identified regulator pressure as a determinant of the environmental performance of development companies, and Annandale and Taplin (2003) found that development companies treat environmental approvals regulation as an important opportunity to improve their project designs and thereby avoid negative environmental outcomes caused by poor designs. Proponents may therefore treat regulators’ expectations as the ceiling for scientific quality in EA. Sometimes there is a lack of consensus about the clarity of EA regulations, and different stakeholders may understand these regulations differently (Arts et al. 2012). If the quality of science in EA is to be improved, raising regulators’ requirements or expectations would be one of the most direct ways; in some jurisdictions, this should be possible through regulators’ interpretations of the scientific requirements placed on proponents, and even through informal negotiations about how to interpret statutory and regulatory requirements.

Survey and interview participants frequently linked the availability of resources (i.e. human, temporal, and financial) with improvement in the scientific dimension of EA, and these links can be interpreted from four perspectives. First, in the EIS preparation phase, well-qualified practitioners with a high level of expertise and sufficient time and funding are needed to collect sufficient baseline data and to apply appropriate scientific methods in predicting possible impacts and determining impact significance. Second, at the public participation stage, sufficient staff, time, and funding are also important to support the participation of the public and other stakeholders (e.g. NGOs and aboriginal groups) in the EA discourse. Third, in the post-EA phase, sufficient resources are important for governments to conduct monitoring and follow-up and to post the resulting reports to the public record for adaptive-learning purposes. Fourth, independent and sufficient research funds are an important condition for independent science.

These results overlap with those of Morrison-Saunders et al. (2001), who pointed out that sufficient time and financial resources are an important determinant of the level of science in EA and that the insufficiency of research funds provided by proponents is a major factor limiting the level of science applied. Additionally, Morrison-Saunders and Bailey (2009) demonstrated that sufficient human resources play an important role in the quality and effectiveness of EA practices: staff shortages in regulatory agencies caused by a recent resource boom in Australia reduced the capacity of regulators to meet the requirements and expectations of other stakeholders, and the relationships between regulators and other stakeholders became strained. In addition, Jones and Fischer (2016) concluded that a lack of funds is one of the most pressing barriers to the uptake of EA follow-up. Sufficient human, temporal, and financial resources are therefore needed to improve the scientific practices and effectiveness of EA.

Based on the survey responses and interviews, adversarial political tensions were identified among EA stakeholders, which reflect weaknesses in effective communication about their different expectations and the challenges encountered in their different roles. These communication gaps generate different perceived satisfaction levels with the quality of science in EA. Noble (2015b) indicated that different stakeholders hold different values, demands, and expectations in the context of cumulative effects assessment, which may lead to a highly adversarial social-political EA environment. Better recognition and understanding of different stakeholders’ values may help improve the effectiveness of EA (Cape et al. 2018). Improving communication among stakeholders can thus promote mutual understanding, thereby relieving adversarial political tensions and creating a foundation for a collaborative relationship between EA stakeholders. Whether a collaborative relationship can indeed emerge will depend largely on the cultural and governance contexts within which a specific national or sub-national EA system operates (see Monteiro et al. 2018).

Literature emphasizing the importance of moving from adversarial relationships among EA stakeholders towards collaborative ones in order to improve the quality of science can be traced back to the 1980s (i.e. Caldwell et al. 1982; Beanlands and Duinker 1983). Ryder et al. (2010) elaborated on the importance of creating interdisciplinary teams in an adaptive-learning management system. Mackinnon et al. (2018) demonstrated the gaps between science inside EA and science outside EA and emphasized that improving science inside EA depends on more collaborative relationships and arrangements among EA stakeholders. However, how to develop and/or improve such collaborative arrangements has not drawn enough attention.

Morrison-Saunders and Bailey (2009) described a positive initiative, the Partnering Agreement between regulators and consultants in Western Australia, and demonstrated the benefits of the resulting cooperative relationship between these two EA stakeholders. However, it may be easier to promote a collaborative relationship between two EA stakeholders than among three or more, especially when the stakeholders hold conflicting values and expectations about the quality of science and the effectiveness of EA. Even though regulators and consultants play entirely different roles in the EA process, based on our Canadian experience and on what Morrison-Saunders and Bailey (2009) report, they generally have similar EA-related educational backgrounds and working environments. It may therefore be more challenging to motivate stakeholders with different EA-related training backgrounds to cooperate and collaborate.

The willingness of EA stakeholders to cooperate and collaborate to improve their working relationships and EA performance is key to initiating such a collaborative arrangement (Morrison-Saunders and Bailey 2009). However, in this study’s survey responses, EA practitioners attached less importance than EA scholars to the participation of scientific communities in the EA process, which reflects, to some degree, an unwillingness among practitioners to work collaboratively with scholars. Some survey comments even explained that, in reality, EA scholars and practitioners mutually avoid each other because of their different value systems. Additionally, from a scholar’s perspective, participation in EA practice-related activities is voluntary, and a scholar’s contribution to these practices is rarely highly valued in the academic context; scholars thus lack incentives (beyond possible consultancy fees) to participate in EA activities. These challenges clearly must be overcome to develop more collaborative and participatory working relationships among stakeholders for improving the quality of science in EA.

Greig and Duinker (2011) outlined a potential collaborative relationship between EA practitioners and scholars for improving the quality of science in EA and pointed out one condition that must be met for such an arrangement to work: all stakeholders should reach a consensus on the status of science in EA and agree that its quality needs to be improved to deliver better EA outcomes. However, the survey responses show that EA scholars and practitioners perceive the quality of science in EA differently, and these different satisfaction levels indicate that, at the least, scholars and practitioners do not share a consensus on the status of science in EA; consequently, a shared willingness to change the current EA situation has yet to emerge.

Conclusions

This study indicates that EA scholars were more dissatisfied with the quality of science in EA than EA practitioners. When survey respondents and interviewees explained their satisfaction levels, their perceptions were generally related to their underlying expectations of the scientific practices in EA and their understanding of science in EA. Were we to do the study again, we would seriously consider devising a means of interrogating people’s understandings of science and the roles it plays in EA. This study implies that EA practitioners play an influential role in determining the quality of science in EA, whereas EA scholars play a limited one; the practitioners, however, did not show a willingness to change this situation. The study also implies that EA scholars and practitioners do not share a consensus on the status of science in EA. This disagreement may be a barrier to mutual understanding and effective communication among EA stakeholders, thereby hindering the building of a collaborative relationship between EA scholars and practitioners for improving the quality of science in EA (Greig and Duinker 2011).

This study aligns with the conclusion of Mackinnon et al. (2018) that there are large gaps between science inside EA and science outside EA. Factors contributing to the barriers to filling this gap include EA stakeholders’ different understandings of and expectations for the quality of science, the role of scholars in EA, and the purpose and objectives of EA. These disagreements imply insufficient and/or ineffective communication among EA stakeholders, which should be addressed if a more collaborative arrangement is to be built for better-quality science in EA.

As for limitations, because of time constraints this study collected 56 online survey responses (35 EA scholars and 21 EA practitioners) and interviewed 13 Canadian EA professionals. The small online survey sample is not sufficient to represent all EA scholars and practitioners throughout the world statistically. In addition, the countries and/or continents in which the survey respondents mostly work on EA projects differ (), which may also contribute to their different perceptions of the quality of science in EA, given the different EA conditions in these countries and/or continents. All interviewees were selected from Canada, and their EA-related working backgrounds are mostly Canadian. These factors may introduce bias into our findings.

Future research on diverse views about science in EA should focus on (1) how to promote mutual understanding and effective communication among EA stakeholders so as to create a solid foundation for potential collaborative working relationships; (2) how to promote the participation of EA scholars in the EA process in order to improve the scientific foundation for EA and generate a high-quality EA discourse; (3) how to raise regulators’ expectations for the quality of science in EA; and (4) how to mediate adverse tensions in the relationships of various EA stakeholders, especially when they hold conflicting values, beliefs, and purposes.

Acknowledgments

We would like to thank the International Association for Impact Assessment (IAIA) for distributing online survey links through its normal distribution list and monthly e-news.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

  • Annandale D. 2000. Mining company approaches to environmental approvals regulation: a survey of senior environmental managers in Canadian firms. Resour Policy. 26:51–59.
  • Annandale D, Morrison‐Saunders A, Bouma G. 2004. The impact of voluntary environmental protection instruments on company environmental performance. Bus Strategy Environ. 13(1):1–12.
  • Annandale D, Taplin R. 2003. Is environmental impact assessment regulation a ‘burden’ to private firms? Environ Impact Assess Rev. 23(3):383–397.
  • Arts J, Runhaar HAC, Fischer TB, Jha-Thakur U, van Laerhoven F, Driessen PPJ, Onyango V. 2012. The effectiveness of EIA as an instrument for environmental governance – a comparison of the Netherlands and the UK. J Environ Assess Policy Manage. 14(4):1250025.
  • Barker A, Wood C. 1999. An evaluation of EIA system performance in eight EU countries. Environ Impact Assess Rev. 19(4):387–404.
  • Beanlands G, Duinker PN. 1983. An ecological framework for environmental impact assessment in Canada. Halifax (NS): Institute for Resource and Environmental Studies, Dalhousie University.
  • Bisbal GA. 2002. The best available science for the management of anadromous salmonids in the Columbia River Basin. Can J Fish Aquat Sci. 59(12):1952–1959.
  • Briggs SV. 2006. Integrating policy and science in natural resources: why so difficult? Ecol Manage Restor. 7(1):37–39.
  • Caldwell LK, Bartlett RV, Parker DE, Keys DL. 1982. A study of ways to improve the scientific content and methodology of environmental impact analysis: final report to the National Science Foundation. Bloomington (Ind): School of Public and Environmental Affairs, Indiana University.
  • Canelas L, Almansa P, Merchan M, Cifuentes P. 2005. Quality of environmental impact statements in Portugal and Spain. Environ Impact Assess Rev. 25(3):217–225.
  • Cape L, Retief F, Lochner P, Fischer T, Bond A. 2018. Exploring pluralism – different stakeholder views of the expected and realised value of strategic environmental assessment (SEA). Environ Impact Assess Rev. 69:32–41.
  • Cashmore M, Gwilliam R, Morgan R, Cobb D, Bond A. 2004. The interminable issue of effectiveness: substantive purposes, outcomes and research challenges in the advancement of environmental impact assessment theory. Impact Assess Proj Appraisal. 22(4):295–310.
  • Cheremisinoff PN, Morresi AC. 1977. Environmental assessment and impact statement handbook. Ann Arbor (MI): Ann Arbor Science.
  • De Winter JCF, Dodou D. 2010. Five-point Likert items: T test versus Mann-Whitney-Wilcoxon. Pract Assessment, Res Eval. 15(11):1–16.
  • Dipper B. 1998. Monitoring and post-auditing in environmental impact assessment: a review. J Environ Plan Manage. 41(6):731–747.
  • Duinker PN. 2013. Proposed environmental impact statement for OPG’s Deep Geological Repository (DGR) Project for low and intermediate level waste. [accessed 2016 Jun 23]. www.ceaa-acee.gc.ca/050/documents/p17520/94202E.pdf
  • Enríquez-de-Salamanca Á. 2018. Stakeholders’ manipulation of environmental impact assessment. Environ Impact Assess Rev. 68:10–18.
  • Fischer TB, Noble B. 2015. Impact assessment research – achievements, gaps, and future directions. J Environ Assess Policy Manage. 17(1):1501001.
  • Fuller K. 1999. Quality and quality control in environmental impact assessment. In: Petts J, editor. Handbook of environmental impact assessment: impact and limitations. Oxford: Blackwell Science Ltd; p. 55–82.
  • Gibbons P, Zammit C, Youngentob K, Possingham HP, Lindenmayer DB, Bekessy S, Burgman M, Colyvan M, Considine M, Felton A, et al. 2008. Some practical suggestions for improving engagement between researchers and policy‐makers in natural resource management. Ecol Manage Restor. 9(3):182–186.
  • Glasson J, Therivel R, Weston J, Wilson E, Frost R. 1997. EIA- learning from experience: changes in the quality of environmental impact statements for UK planning projects. J Environ Plan Manage. 40(4):451–464.
  • Gray I, Edwards-Jones G. 2003. A review of environmental statements in the British forest sector. Impact Assess Proj Appraisal. 21(4):303–312.
  • Green OO, Garmestani AS. 2012. Adaptive management to protect biodiversity: best available science and the Endangered Species Act. Diversity. 4(2):164–178.
  • Greig LA, Duinker PN. 2011. A proposal for further strengthening science in environmental impact assessment in Canada. Impact Assess Proj Appraisal. 29(2):159–165.
  • Holling C. 1978. Adaptive environmental assessment and management (Wiley IIASA international series on applied systems analysis). New York: John Wiley and Sons.
  • Jalava K, Pasanen S, Saalasti M, Kuitunen M. 2010. Quality of environmental impact assessment: Finnish EISs and the opinions of EIA professionals. Impact Assess Proj Appraisal. 28(1):15–27.
  • Jones R, Fischer TB. 2016. EIA follow-up in the UK – a 2015 update. J Environ Assess Policy Manage. 18(1):1650006.
  • Kabir SMZ, Momtaz S. 2012. The quality of environmental impact statements and environmental impact assessment practice in Bangladesh. Impact Assess Proj Appraisal. 30(2):94–99.
  • Landim SNT, Sánchez LE. 2012. The contents and scope of environmental impact statements: how do they evolve over time? Impact Assess Proj Appraisal. 30(4):217–228.
  • Lee B, Haworth L, Brunk C. 1995. Values and science in impact assessment. Environments. 23(1):93–100.
  • Lester SE, McLeod KL, Tallis H, Ruckelshaus M, Halpern BS, Levin PS, Chavez FP, Pomeroy C, McCay BJ, Costello C, et al. 2010. Science in support of ecosystem-based management for the US west coast and beyond. Biol Conserv. 143(3):576–587.
  • Mackinnon AJ, Duinker PN, Walker TR. 2018. The application of science in environmental impact assessment. London (UK): Routledge Focus on Environment and Sustainability, Taylor and Francis.
  • Microsoft. 2018. Microsoft Excel (Version 15.32) [Spreadsheet software]. [accessed 2018 Jun 23]. https://products.office.com/en-ca/excel
  • Minitab Inc. 2018. Minitab Express (Version 1.5.0) [Statistical analysis software]. [accessed 2018 Jun 23]. http://www.minitab.com/en-us/products/express/
  • Monteiro MB, Partidário MR, Meuleman L. 2018. A comparative analysis on how different governance contexts may influence strategic environmental assessment. Environ Impact Assess Rev. 72:79–87.
  • Morrison-Saunders A, Annandale D, Cappelluti J. 2001. Practitioner perspectives on what influences EIA quality. Impact Assess Proj Appraisal. 19(4):321–325.
  • Morrison-Saunders A, Bailey J. 2003. Practitioner perspectives on the role of science in environmental impact assessment. Environ Manage. 31(6):683–695.
  • Morrison-Saunders A, Bailey M. 2009. Appraising the role of relationships between regulators and consultants for effective EIA. Environ Impact Assess Rev. 29(5):284–294.
  • Morrison-Saunders A, Sadler B. 2010. The art and science of impact assessment: results of a survey of IAIA members. Impact Assess Proj Appraisal. 28(1):77–82.
  • Noble B. 2015a. Introduction to environmental impact assessment: a guide to principles and practice. 3rd ed. Ontario: Oxford University Press.
  • Noble BF. 2015b. Cumulative effects research: achievements, status, directions and challenges in the Canadian context. J Environ Assess Policy Manage. 17(1):1550001.
  • ObjectPlanet. 2018. Opinio (Version 7.7.2) [Online survey tool]. [accessed 2018 Jun 23]. http://www.objectplanet.com/opinio/
  • Ortolano L, Shepherd A. 1995. Environmental impact assessment: challenges and opportunities. Impact Assess. 13(1):3–30.
  • QSR International Pty Ltd. 2018. NVivo (Version 11.4.2) [Qualitative data analysis software]. [accessed 2018 Jun 23]. http://www.qsrinternational.com/nvivo/nvivo-products/nvivo-mac
  • Rogers KH. 2006. The real river management challenge: integrating scientists, stakeholders and service agencies. River Res Appl. 22(2):269–280.
  • Roux D, Rogers K, Biggs H, Ashton P, Sergeant A. 2006. Bridging the science–management divide: moving from unidirectional knowledge transfer to knowledge interfacing and sharing. Ecol Soc. 11(1):art.4.
  • Ryder DS, Tomlinson M, Gawne B, Likens GE. 2010. Defining and using ‘best available science’: a policy conundrum for the management of aquatic ecosystems. Mar Freshw Res. 61(7):821–828.
  • Sadler B. 1996. International study of effectiveness of environmental assessment – environmental assessment in a changing world: evaluating practice to improve performance. Ottawa: Canadian Environmental Assessment Agency, 248 p.
  • Sullivan PJ, Acheson JM, Angermeier PL, Faast T, Flemma J, Jones CM, Knudsen EE, Minello TJ, Secor DH, Wunderlich R, et al. 2006. Defining and implementing best available science for fisheries and environmental science, policy, and management. Fisheries. 31(9):460–465.
  • Thompson S, Treweek JR, Thurling DJ. 1997. The ecological component of environmental impact assessment: a critical review of British environmental statements. J Environ Plan Manage. 40(2):157–171.
  • Tzoumis K. 2007. Comparing the quality of draft environmental impact statements by agencies in the United States since 1998 to 2004. Environ Impact Assess Rev. 27(1):26–40.
