
The functions of knowledge management processes in urban impact assessment: the case of Ontario

Pages 265-280 | Received 21 Oct 2017, Accepted 18 Jan 2018, Published online: 26 Mar 2018

Abstract

Addressing sustainability involves multidisciplinary and heterogeneous knowledge with extended spatial and temporal horizons. In such a context, a functional or performance-based approach to setting up and designing knowledge management processes is more suitable than a procedural (prescriptive) one. To help establish the functions of knowledge management processes, we examined 30 cases of environmental impact assessment in Ontario, Canada and engaged experts who were involved in these cases. Four main functions are proposed as essential: support the acquisition and use of sustainable knowledge; communicate sustainable practices and harness community input; facilitate coordinated analysis and integrated assessment; and reengineer regulation enforcement to simplify the process. The proposed functions are not intended to be universal, as conditions (legal and technical) vary from one jurisdiction to another. However, it is hoped that they can be benchmarked by other jurisdictions.

Introduction

Many jurisdictions are considering formally requiring the assessment of sustainability in project appraisal. For those that are not considering such a move, environmental impact assessment (EIA) practices have gradually incorporated many aspects of sustainability into the assessment process (Morrison-Saunders and Fischer Citation2010). In both scenarios, public agencies are in need of means to help enhance the processes and practices of knowledge management (KM) during impact assessment. This is because the nature of sustainability analysis and recent sociotechnical trends both emphasize the value and challenge of properly managing knowledge in impact assessment. Sustainability is a contested concept (Blewitt Citation2014) and a multidisciplinary domain, with extensive subjectivity and expanded spatial and temporal scales. Progressively, citizens are becoming a major stakeholder in the process. With their increasing technology acumen and access to knowledge resources, they expect sustained and customized access to information (see El-Diraby and Wang Citation2005). More importantly, the evolving practices of co-creation in planning advocate an epistemology of knowledge multiplicity. Citizens have equally valid knowledge that should be harnessed. In co-creation, the role of public agencies is not to decide, but rather to seek and actualize multiple knowledge(s). Such knowledge is essentially context-dependent.

Post-modern assessment (Fischer Citation2003) continually seeks to strengthen institutions and governance and to open up the decision-making process, instead of relying on the traditional, yet simple, linear, and technical practices of EIA. Considering sustainability in assessment means promoting issues such as dialog, negotiation, and cooperation (OECD Citation2006). This will require complementing procedural or substantive rationality with deliberative rationality (Jiliberto Citation2011).

It is important to note here that managing knowledge relates to two fundamental concepts. The first is knowledge itself, such as facts, wisdom to make decisions, best practices, models, and analysis software. The second is the process of acquiring and using knowledge, which is the focus of this paper. The complexity of sustainability assessment highlights the importance of designing efficient and agile processes for KM. Co-creation means that knowledge is contextual and evolutionary: it emerges from the interactions between stakeholders and is largely based on their own knowledge models. In this view, knowledge is networked (or distributed). This means that KM processes have to be designed to be flexible (accommodating local conditions), collaborative (designed with active involvement from all stakeholders), and adaptive (morphing to address evolving needs and implement newly generated knowledge).

In such a dynamic environment, process managers, whether in public or (large) private agencies, who want to enshrine the generation, consideration, and use of knowledge in assessment processes have realized the need to complement the traditional prescriptive approach to assessment processes with a functional one (see Arts et al. Citation2012 for a historical reflection on the progress of practice; and Cohen Citation2010 for a legal perspective on the need to balance the two approaches). The former, which dominated initial EIA practices, includes specific rules and tasks to be performed. The rigidity of this structured model may have contributed to the devolution of EIA into an administrative box-checking process (see Bond et al. Citation2018). The latter, in contrast, establishes a set of objectives to be achieved – still with the ability to include (streamlined) requirements for box-checking (see, for example, discussions by Morrison-Saunders and Retief Citation2012 on the co-existence of the two approaches in the evolved EIA practices of South Africa). Flexibility in process structure supports the needed contextualization of managing knowledge, accommodates rapid changes to knowledge, and opens the door for innovation. More importantly, a functional approach shifts the evaluation from a compliance mentality to a performance mentality: how effective was the process, how engaged were citizens, how valuable and relevant was the knowledge generated and used?

A prescriptive approach can be efficient, in fact needed, in the early stages of assessment practice (see the interesting discussion of the practicality of assessment and the value of a structured process by Fischer and Gazzola Citation2006). For jurisdictions with established practices (especially those that have developed a clear compliance-oriented mentality), in contrast, the objective of this paper is to suggest a set of functions that can guide the design of assessment work processes to enable better management and use of knowledge. Within a process-oriented and evolutionary view of knowledge, and with an emphasis on performance and engagement, what actions and levels of performance are required? These functions can help draft regulations and policies that, on one hand, embed KM in any consideration of sustainability in impact assessment and, on the other hand, do so in a manner that is flexible enough to accommodate local conditions and the nature of the project, and that empowers citizens to co-create ideas and solutions. Our analysis addressed two major dimensions: the ‘what to do’ (functions needed) and the ‘how to do it’ (organizational change paradigms). The proposed framework does not aim at presenting tools to quantify some of the subjective items in the assessment. Rather, it presents a proposal to support the restructuring of work processes to enhance the efficiency of KM. Our work advocates a migration from the administrative and repository-based approach to KM (which dominated EIA practices) to a collaborative, processual, innovation-supporting approach. It is also helpful in defining gaps – an essential part of any organizational and policy transformation (see, for example, the work by Fischer Citation2006 on the implementation of strategic environmental assessment in transport projects). Building better KM capacity and using advanced communication tools within the context of a leaner organization can help avoid repeating some of the problems of traditional EIA practices, such as limited innovation, lengthy controversial debates, less effective community engagement, or limited organizational transformation to sustainable practices (see, for example, Noble and Nwanekezie Citation2017). At the same time, this can also help exploit recent technology advancements to harness knowledge and boost innovation.

To present the ‘functions’ of KM processes, we adopted a method called the Function Analysis System Technique (FAST), which is part of value engineering. Value analysis is a best practice and decision support tool for developing alternative solutions to problems through an in-depth study of their basic functions. FAST aims to systematically establish a hierarchy of functions for a program, project, product, system, or process. By formalizing the structure of required functions, it facilitates more consistent evaluation of the performance of systems: how each action contributed to achieving the required functions of KM processes. Typically, FAST is presented through a diagram that can be read both left-to-right and right-to-left. The former begins with the higher order functions, decomposing to the right, and at each interface answers the question ‘How can this function be achieved?’ The latter addresses the question ‘Why is this function necessary?’ The question of ‘How?’ relates to the performance of the functions and defines their order, whereas ‘Why?’ relates to the goals of said functions and assesses the logic behind the choice of function description/position (Snodgrass and Kasi Citation1986).
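To make the two reading directions concrete, the sketch below encodes a fragment of a FAST hierarchy as a simple tree. This is a minimal illustration of the technique only; the function names used are hypothetical placeholders, not the functions proposed later in this paper.

```python
# A minimal sketch of a FAST hierarchy (illustrative names, not the paper's functions).
# Reading left-to-right ("How?") descends the tree; right-to-left ("Why?") ascends it.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Function:
    name: str                            # a verb-noun description
    parent: Optional["Function"] = None
    children: List["Function"] = field(default_factory=list)

    def add(self, child: "Function") -> "Function":
        child.parent = self
        self.children.append(child)
        return child

    def how(self) -> List[str]:
        """How can this function be achieved? -> its lower order functions."""
        return [c.name for c in self.children]

    def why(self) -> Optional[str]:
        """Why is this function necessary? -> its higher order function."""
        return self.parent.name if self.parent else None

# A hypothetical fragment of a FAST hierarchy:
root = Function("manage knowledge")
communicate = root.add(Function("communicate practices"))
communicate.add(Function("engage stakeholders"))

print(communicate.how())  # ['engage stakeholders']
print(communicate.why())  # manage knowledge
```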

This paper starts by reviewing related work in the area of KM, especially its challenges when sustainability is considered in impact assessment. We link these challenges to the lessons learned from EIA and to the aspirations of co-creation. We then discuss functional analysis, its importance, and its tools. The research method is detailed after that, including profiles of the cases reviewed and the experts interviewed. We then present the proposed functions and a set of corresponding organizational cultural paradigms needed to support effective change management.

Related work

Considering (elements of) sustainability in impact assessment in policy, program, and project making is becoming a formal requirement in several jurisdictions – for example, the European directive on non-financial reporting (see Fischer et al. Citation2015). Even in jurisdictions where sustainability analysis is not formally embedded in the impact assessment process, the scope of EIA and strategic environmental assessment has been expanding to include several aspects of sustainability (Fischer Citation2007; Morrison-Saunders et al. Citation2014). Effective KM is key to the success of any impact assessment, particularly when sustainability is being considered (Bond et al. Citation2010). Some of the main challenges to KM processes include the following (see Gibson Citation2005; Thomson et al. Citation2011):

Expanded spatial and temporal horizons: considering sustainability requires balancing possible conflicts between global, regional, and local objectives (both environmental and socioeconomic) across longer time spans.

Multidisciplinary nature of knowledge: relevant assessment knowledge is diversified. To illustrate, in conducting assessment (in a developing country), Lambert et al. (Citation2011) identified 14 different ‘functions’ for sustainable civil infrastructure projects. These include: create employment, reduce poverty, improve connectivity and accessibility, increase industrial/agricultural capacity, improve public services and utilities, reduce corruption/improve governance, increase private investment, improve education and health, improve emergency preparedness, improve refugee management, preserve religious and cultural heritage, improve media and information technology, increase women’s participation, and improve environmental and natural resource management. While not all of these are needed in every project (Tajima and Fischer Citation2013), the selection process is not easy.

Subjectivity of analysis: with the increased importance of socioeconomic assessment, the analysis parameters are becoming more subjective, with varying (or contradicting) predictions that are sensitive to debatable assumptions (see Pope and Petrova Citation2017).

Complexity of trade-offs: the expanded scope of assessment and the increasing number of stakeholders with conflicting objectives create a complex network of tradeoffs (see Soria-Lara et al. Citation2016).

Multiplicity of knowledge: the active engagement of citizens and their increased technical agency mean that KM processes have to be designed to broker agreements between different knowledge models (see Kinawy et al. Citation2017).

The social context and the needs of sustainability assessment make the process complex, with controversial positions and views held by most stakeholders. Decision-makers span several administrative levels (national, regional, local) and systematic tiers (policy, planning, program, and project), in addition to cross-sectional considerations including, for example, energy, land use, and economic development (see also Hayes et al. Citation2017). Effective communication is key to handling complexity and diversified stakeholders. This should be coupled with commonly agreed upon objectives. Increasingly, prescriptive guidance on developing such objectives is becoming hard to use – in many cases not suitable. The interplay between the social, environmental, and economic features of a project is context-dependent and requires understanding and embracing local knowledge. Often, structural changes take place in situations of disagreement (or when truly innovative solutions are being pursued), as a reflection of misalignments between different authorities, or due to misallocation of resources. However, it is the evolution of a leading role for communities, not only as decision-makers but also as knowledge creators and process designers, that is having the most significant impact on assessment practice.

Knowledge spans two major dimensions: assessment knowledge (measuring impacts or benefits) and processual knowledge (the skillful management of the assessment process). The first can be divided into: objective (science-based and formal) knowledge, which relates to hard facts and numbers; and subjective knowledge, which relates to the conceptualization of possibilities (expecting/imagining consequences or events) and the assignment of values to preferences (see Bäckstrand Citation2003). Of course, these categories are highly related to indigenous knowledge: the views and experiences of local residents. The second is divided into: integration knowledge (how to reconcile and combine assessments from different disciplines); and what McClean and Shaw (Citation2005) call bureaucratic knowledge (the civil servants’ know-how needed to secure quick approvals and compliance with rules). Such an expanded view of knowledge did not play a major role in EIA as it devolved, in many cases, into an administrative and reactive process – the paper trail is what matters (Bond et al. Citation2010). However, rich bureaucratic knowledge was generated over the last four decades of implementing EIA. Benchmarking such pitfalls is the first step in enhancing KM when considering sustainability in the assessment (Edelenbos et al. Citation2011).

In designing KM processes, the diversity and subjectivity of the decision criteria emphasize the need for better means of enhancing knowledge capture and reuse, including streamlining communication and information flow between stakeholders, coordinating analyses, and supporting innovation. Olagunju and Gunn (Citation2016) identified three silo effects. First, institutional – which can have negative impacts on coordination and on the clarity of goals and expectations. Second, disciplinary – which can lead to limited communication and skepticism around data sharing. Third, transactional – neglecting the collective social and environmental impacts in favor of an individual, narrow perspective. They emphasized the role of learning and multiple-domain expertise as opportunities for enhancing cross-domain integration.

The implications for decision-making are clear. The selection of decision criteria and the design of the process itself become more adaptive to the project and community context. Four applicable decision models have been discussed: (1) theory-based, where formal top-down indicators are selected; (2) data-driven, where the ability to assess is controlled and driven by available data; (3) politically driven, where selections are motivated by politics or legal requirements; and (4) situation-driven, where the project situation drives the selection of decision indicators (see also Lehtonen Citation2009). Increasingly, the latter model implies that even the design of the process itself becomes adaptive to the project and community context. In fact, the very model of (strategic environmental) assessment is evolving to span and balance two fundamental approaches: an impact assessment mode (where compliance and EIA-like technical assessment are practiced); and a strategic transition mode, where the objective is to actualize the conditions that enable institutional change to support and lead change in governance toward sustainability (Noble and Nwanekezie Citation2017).

Collaborative planning popularized new forms of participation that support a migration from a decision-making mentality to an empowerment culture, and from a rationalist, theory-based approach to one that integrates rationality and deliberation. Collaborative planning transfers the objectives of KM processes from the realm of the dutiful to that of the actualizing (Wells Citation2015). To this end, knowledge generation and learning are as important criteria in evaluating the success of assessment processes as procedural effectiveness, substantive effectiveness (leading to change), transactive effectiveness (value to stakeholders), and normative effectiveness (achieving goals and accommodating all affected) (Bond et al. Citation2012). In fact, social learning is both a tool and an objective of modern urban governance (Jha-Thakur et al. Citation2009).

The bottom-up, context-sensitive nature of assessments in urban settings, as well as the new role of KM processes as an empowerment and learning mechanism, highlights the need for adaptive KM process structures (see Krueger et al. Citation2012). The design of KM processes must also accommodate recent sociotechnical changes. In the evolving knowledge-intensive economy, knowledge production and reuse involve thinking, and they are often collaborative and iterative, which can make knowledge workers (in companies and public agencies) view prescriptive processes as a procedural bureaucratic annoyance (Davenport Citation2015). In designing KM processes, we have to embed flexibility and adopt a mentality of service: participants finding value in the discipline that a process structure brings, while remaining free to be creative and, if needed, improvisational.

Davenport (Citation2015) defines four categories of work processes based on the level of interdependence (from a single actor to group work) and the complexity of work (from routine to interpretational/judgment); a minimal sketch encoding this typology follows the list below:

Transactional model (an individual actor doing a routine job): repeated tasks that are highly dependent on formal rules and training. This model provides little discretion to the workforce and, lately, lends itself to automation of tasks.

Expert model (an individual actor doing a job that requires interpretational tasks): this model is judgment-oriented and reliant on individual expertise. It depends on a star performer.

Integration model (groups working on routine tasks): this model relates to systematic and repeatable work. It is highly reliant on formal processes, methods, and standards. Its efficiency depends on high integration across functional boundaries.

Collaborative model (groups working on interpretational work): this model relates to improvisational work and is highly reliant on the expertise of multiple functional players. It is ‘processed’ through the fluid deployment of flexible teams.
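As a compact illustration (our own encoding, not Davenport’s formalization), the four models can be expressed as a two-by-two lookup over the two dimensions:

```python
# A minimal sketch of Davenport's 2x2 work-process typology; the enum and model
# names follow the list above, but this encoding is ours, not Davenport's.
from enum import Enum

class Interdependence(Enum):
    INDIVIDUAL = "single actor"
    GROUP = "group work"

class Complexity(Enum):
    ROUTINE = "routine"
    INTERPRETATIONAL = "interpretation/judgment"

MODELS = {
    (Interdependence.INDIVIDUAL, Complexity.ROUTINE): "transactional",
    (Interdependence.INDIVIDUAL, Complexity.INTERPRETATIONAL): "expert",
    (Interdependence.GROUP, Complexity.ROUTINE): "integration",
    (Interdependence.GROUP, Complexity.INTERPRETATIONAL): "collaborative",
}

# Co-creation in sustainability assessment falls in the collaborative quadrant:
print(MODELS[(Interdependence.GROUP, Complexity.INTERPRETATIONAL)])  # collaborative
```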

Generalized and prescriptive processes can lead to significant mismatches in setting a common language (Kinawy et al. Citation2017) or in co-developing project objectives (Golobič et al. Citation2015). In fact, significant differences can be found between assessments in the same region. These are attributed not only to differences in region or project scope, but simply to differences between the consulting teams. Under the same conditions, teams made different assumptions and, in fact, used different terminology and models of the assessment process, which can hinder overall consistency and the very understanding and approval of the assessment (Fidélis et al. Citation2016). In contrast, while adaptive processes can provide adequate means to handle the unstructured nature of situation-based analysis, a fuzzy process may fail or delay the decision. This is why balancing statutory planning regulations is essential to the success of the assessment (Thygesen and Agarwal Citation2014). A deliberate and context-responsive balance is always more advantageous in addressing the challenges of post-modern assessment, for example: ‘Transparent and consistent value frames, the consideration of traditional decision making approaches, systematic tiering, a willingness to cooperate, an acknowledgment of uncertainties and appropriate funding’ (Fischer Citation2005).

It can be argued that KM (for sustainability assessment) within a co-creation culture requires added emphasis on the collaborative model. Traditional EIA may have evolved into an integration model with extensive implementation of the transactional model and reliance on the expert model. The collaborative work model is a challenge to process analysts and managers. The iterative and evolutionary nature of KM in this mode makes it hard to formalize repeatable steps of work: ‘every day is different and knowledge generated is constantly fed into the assessment and, equally important, in redesigning the processes to match the needs for formalizing discovered knowledge’ (see El-Diraby Citation2006).

The design of the collaborative aspects of the processes must assure agility and support continual benefit from the integration model. Combining all input and analyses will always remain an important part of assessment practice. In fact, elements of the expert and transactional models will remain essential to the success of the assessment exercise. The current need, however, is to adopt and strengthen the collaborative model of the process. Therefore, such design must focus on functional analysis rather than process modeling. According to the Information Technology Infrastructure Library (Pieper and van der Veen Citation2005), a business process is defined as: ‘a structured set of activities designed to accomplish a specific objective. A process takes one or more defined inputs and turns them into defined outputs’. A function is a result-oriented action performed by a device, department, or person that produces an outcome. A function (of a process) remains fixed, whereas the means of implementing it (structuring and sequencing the actions) generally change. A function relates to defining what is to be accomplished by a system, not how it is to be accomplished. A process implies a sequence of tasks, while a function describes an action or an objective related to a certain level of performance in achieving results.

The Function Analysis System Technique (FAST) aids in thinking about problems objectively and in identifying the scope of a project or a process independent of its implementation practices. It aims at discovering the essential value within, and the performance required from, a system. When modeling a system, we can become locked into the usual course of action. By thinking abstractly about what needs to be done and what creates value, not how to do things, we create more opportunities for innovation. Functions are typically expressed as verb-noun combinations. The verb answers the question, ‘What is to be done?’ It defines the required action. The noun answers the question, ‘What is it being done to?’ Identifying a function in the broadest possible terms provides the greatest potential for divergent thinking because it gives the greatest freedom for creatively developing alternatives (Miles Citation2015).

Methodology

Our method in conducting this research was based on two resources: case studies and expert interviews. We examined 30 cases of EIA in Ontario, Canada and interviewed experts to help define the required functions. We targeted EIA because our scope is to help jurisdictions that are migrating from traditional EIA legislation to legislation that formally includes the consideration of sustainability. This does not mean that EIA does not cover (elements of) sustainability – in most jurisdictions, it has evolved to do so. The objective is: how can we learn from (the evolution of) EIA practices to build new process structures that are more effective in handling the complexities of KM in impact assessment? In such a descriptive research approach, we can learn from the problems, mistakes, innovations, and new approaches devised in EIA practice to make any new formal requirements for KM more effective: learn from the mistakes, avoid the pitfalls, and benchmark best practices.

Our methodology includes a mix of tools. A bottom-up discovery of issues and possible functions and paradigms was conducted through an analysis of 30 EIA cases in Ontario. This was matched by top-down input from experts. Experts were interviewed informally during the case analysis. A preliminary set of functions (FAST) was developed. This was the subject of formal interviews with experts to check the consistency and relevance of the functions. Literature reviews were conducted throughout: at the initial stages to scope the issue and synthesize previous work, during the conceptualization stage to draw on the findings of related analysis, and after the development to assess the relevance of the proposed functions to organizational (reengineering) paradigms in the overall context of impact assessment. Figure 1 shows the main steps of our research methodology.

Figure 1. Research methodology.

Case Studies: before selecting the cases and starting the analysis, we first conducted informal interviews with 16 experts in the domain – mainly public officials, EIA practitioners, engineering/planning consultants, as well as NGOs (non-government organizations). These were unstructured interviews with the aim of soliciting input regarding the state of EIA and its potential role in impact assessment. Our aim was to scope our work based on practitioners’ expertise and to solicit relevant cases to study.

We reviewed 30 EIA cases in Ontario. Of these, 60% were transit or transportation planning related, 27% were landfill related, and 13% were others, such as residual waste, treatment plants, and waterfront improvement. About 83% were individual EAs and 17% were Class EAs. About 73% involved urban areas and 27% involved rural areas. Throughout this stage, we met with 23 professionals who were engaged in these cases to further refine the case summaries. These were informal meetings that followed an open/unstructured interview style. The professionals played a major role in helping us select and access the cases and, more importantly, capture some of the case backgrounds.

For each case, we reviewed the actual reports and correspondence as well as media articles related to the case. We tabulated the contents of each case to summarize the issues, the alternatives considered, the consideration of innovative solutions, potential new knowledge generated, community engagement and feedback, and duration and estimated costs – consulting with participants frequently.

Literature reviews: this work included two activities. The first was a synthesis of published research papers. These covered work theorizing EIA, case studies in conducting EIA, critiques of EIA practices, and discussions of EIA upgrades and its relationship to impact assessment. The second was a review of relevant EIA legislation in other jurisdictions: the USA, Europe, and Australia. The aim was to put the depth and procedures of KM in Ontario EIA into context.

Conceptualization: the modeling part of our work included three outcomes. The first outcome was a FAST model for KM in impact assessment: a functional model to define the required changes/upgrades in current practices (in Ontario) to support a more efficient, knowledge-enabled impact assessment: what actions and processes must be established in order to support KM? The second outcome was a set of guiding principles that should govern the implementation of the KM functions. These are organizational responsibilities that must be maintained to sustain quality in conducting impact assessment: what must an organization do to make sure KM processes run effectively? Finally, the third outcome of our modeling exercise was a prototype knowledge matrix, which is meant to help document future cases of impact assessment in a manner that captures the nature of the KM conducted. This includes the methods used, who was involved, and the results of the case. Upon completing each future case, teams of participants should use this matrix to annotate the case – which should enhance its retrieval in future studies.

Formal Interviews: a set of 18 formal interviews was conducted with domain experts to evaluate the first two parts of this framework. Eight of the interviewees were public officials at the municipal and regional levels; six were consultants specializing in conducting EIA; four were from planning/engineering design firms. The interviews were semi-structured. The first part included a set of formal questions requiring experts to score the significance/relevance of the proposed functions on a scale of 1–5 (with 1 being the low and 5 the high score). The second part was an open-ended question asking experts to add any additional functions as they saw fit. The third part asked experts to evaluate which of the guiding principles apply to each function. The first author recorded experts’ responses/scores during the session. Comments by experts were also documented.

The use of a measurement scale is not meant to reflect a (universal) importance ranking of the proposed functions, which is hard to establish in this regard. For this reason, we did not use the term ‘importance’ in defining the scale or in asking the experts (we used the term ‘significance score’). The objective of the interviews was mainly to assess relevance and consistency and to solicit additional functions. For more discussion about soliciting expert opinion in this field, see Krueger et al. (Citation2012). However, for the record, the supplementary material includes the values assigned to each function/sub-function. It is important to restate here that the functions developed in this work are not meant to be exhaustive. Rather, they are meant as a framework to support detailed analysis in interested jurisdictions. To this end, the functions, while generic in nature, were developed based on work and cases conducted in Ontario, Canada. It is hoped that other jurisdictions can benefit from benchmarking/customizing some of them for reuse in their local context.

The functions of KM in impact assessment

To support the migration from an administrative-oriented process to a knowledge-enabled process, this paper presents a conceptual framework that considers the necessary functions to be performed in impact assessment. The framework aims to ascertain which policy and governance strategies not only make use of, but also promote and support the growth of, KM systems. The results obtained through the application of functional analysis can be employed as a means of developing and/or evaluating alternative process designs for KM in impact assessment. They are not meant to be the actual processes – rather, a means to guide their development. It is important to clarify that the proposed functions are not universal. No such model can be claimed given the diverse technical, legal, and cultural contexts. The work presented here is intended to be a sample of such models for the case of Ontario, Canada. It can be benchmarked by other jurisdictions based on local contexts.

As shown in Table 1, the main functions proposed for the KM process in impact assessment are: support the acquisition and use of sustainable knowledge, communicate sustainable practices, facilitate coordinated planning, and enforce the Act. The following sub-sections discuss these functions along with relevant background and related topics (based on literature reviews and input from experts). Note: due to the large number of sub-functions, we present the FAST as a table.

Table 1. KM functions.

Support the acquisition and use of sustainable knowledge

The main challenge of KM is connecting the right users with the right knowledge at the appropriate point in time. This requires creating supportive organizational structures, streamlining communication and decision-making flows, and emphasizing collaboration and the diffusion of knowledge by putting the relevant IT tools in place. This is becoming harder every day because of the proliferation of distinct tools for impact assessment. In fact, some authors have discussed the possibility that assessment professionals tend ‘to invent new tools rather than to modify existing ones for reasons that have little to do with effectiveness or efficacy’.

At the core of creating a healthy KM environment is an organizational culture that seeks to ‘learn its way into sustainable future, rather than plan its way’ (Slootweg and Jones Citation2011). Impact assessment needs a change in mentality and, at the same time, is one of the strongest and most effective ways to promote change. In fact, effective impact assessment is not achievable without embracing the critical role of learning, including consistently harnessing informal knowledge and using KM to address interdisciplinary and trans-disciplinary working practices (Gasparatos et al. Citation2008). Achieving institutional change is always challenging. Administratively, this is partly related to inadequate examination of past failures and the traditional risk-averseness of bureaucrats, which has prevented innovative initiatives to address complex issues – the aim in EIA focused on satisfying regulatory obligations. Politically, willingness to change is typically restricted by higher-level policies, conflicts between policy priorities, or the regional or national political environment (especially during economic recession).

The main sub-functions proposed here are:

Support consistency in the processes of acquisition and use [of knowledge]: while knowledge itself is contextual, there is a need to consistently track knowledge as a main task and objective of any assessment. This includes identifying how to link knowledge needs to the issues being considered; identifying acquisition techniques; and establishing procedures for integration.

Establish an interoperable knowledge model: this is a long-term and evolutionary task that aims to create and constantly upgrade a common data model or an ontology to provide consistency and interoperability between different knowledge constructs and data formats.

Support innovation and reengineering: unlike in EIA, the centrality of knowledge in impact assessment requires that we pursue new ideas and support changes to the process structure to support adaptability. Benchmarking and collaborating with other jurisdictions is very helpful, as is support for research and development and standardized methods for post-assessment evaluation and feedback (see Morrison-Saunders et al. Citation2014).

In addition to these formal tools, we should still recognize the importance of informal networks and effective communication in encouraging KM.

Communicate sustainable practices

In today’s sociotechnical culture, citizen science and social learning are realities. Citizen science refers to engaging people in seeking technical knowledge through different means such as crowdsourcing. A KM strategy cannot be effective unless it serves the new decision-making partner: the community (see Vigar Citation2017). The KM strategy must include means to harness and deploy relevant knowledge and practices to support social learning, including raising awareness and creating an environment for ‘knowledge exchanges’. At the core of this is the development of protocols for assuring the following (Evers et al. Citation2016): common understanding of the problem; definition of possible solutions and their accompanying consequences; appreciation of people’s interests and values; reflection on one’s own personal interests; realization of holistic and integrative thinking and its use and application; and learning about methods, tools, and strategies to communicate well. In addition to supporting organizational learning, public agencies need to encourage social learning, where citizens and professionals/bureaucrats learn from each other (see also: Sinclair and Diduck Citation2001; Sinclair et al. Citation2008). This creates ‘a change in understanding that goes beyond the individual to become situated within social units or communities of practice through social interactions between actors within social networks’ (Reed et al. Citation2010).

In order to make effective use of knowledge once it is acquired, it must be communicated in a suitable manner to all involved. In particular, impact assessment processes must encourage feedback by engaging citizens through making information available. To accomplish this, it is important to understand the audience, set minimum communication requirements, and establish matching outreach plans. Communication should flow both ways. In developing communication strategies, advanced impact assessment processes do not seek only to listen to citizens. Rather, they provide information and support to empower citizens in checking, assessing, or commenting on scientific observations, as well as to collaboratively develop, promote, and evaluate their own ideas/solutions. To effectively communicate relevant information and support knowledge generation by citizens, impact assessment administrators need to continuously profile citizens and their needs, and to study and deploy best practices in fostering innovation and idea development (Corral and Monagas Citation2017).

The main sub-functions suggested here are:

Encourage feedback: this includes reaching out to stakeholders during and after the assessment; matching communication tools to community needs; and providing information in a reliable and consistent manner.

Understand the audience: this includes profiling stakeholders and their groups (see Nik Bakht and El-Diraby Citation2016).

Establish minimum requirements/communication plan: no matter how small the impact assessment is, there must be a set of minimum standards of quality for any communication plan. This includes: identifying the issues to communicate; clarifying communication tools to be used; and establishing frequent and targeted outreach events.

While reaching out in participatory approaches is highly desired, it is not always easy to manage: it is hard to bridge the cultural gaps and differing knowledge constructs between participants and between them and experts. In fact, authors have recorded that many participatory approaches fail to result in more informed and effective policies due to over-romanticizing (Hurlbert and Gupta Citation2015). The reality is that within the realm of co-creation, decisions are made through the formation of a complex, dynamic, and interactive network of influences, known as an actor-network (Law Citation1994). Problem analysis happens in rounds, where stakeholders’ agreement evolves through common understanding and effective communication. All technical and official decision-makers interact with each other and with the public representatives during this process toward a consensus point which Bruijn and Heuvelhof (Citation2000) referred to as a ‘package deal’. In fact, Chinowsky et al. (Citation2008) considered project or policy evaluation as a network of interactions, interdependencies, and information exchange. They argued that a network perspective of projects can be more mature than the traditional hierarchical management structure. They proposed the use of social network analysis to model the interaction and power balance between actors.

Web-based tools, such as social media outlets, provide for transparent sharing of data and a vibrant exchange of views in relation to scoping and conceptualization of the problem; envisioning and goal setting; framing of the decision-making approach; cross-checking facts and data; and evaluation of interim solutions. While not a replacement for face-to-face interactions, with their superior ability to support social learning, web-based tools are convenient, have a wider reach, and are cost-effective (see Sinclair and Diduck Citation2017).

One key role for impact assessment administrators in this regard is to promote and protect the debate, in an unbiased manner, from irrelevant or untruthful information. Because of the proliferation of information, videos, and images in online media, almost any opinion can find justification. People can then ‘cherry-pick’ convenient information: information that matches their beliefs (Voinov et al. Citation2016).

Facilitate coordinated analysis

The KM strategy must promote knowledge collection and harmonization to support a key aspect of impact assessment: coordinated decision-making among diversified stakeholders with varying knowledge constructs. The fact that biophysical, social, and economic systems are interconnected across projects and across regions requires exposing interdependencies and creating linkages not only between project features but also in terms of the knowledge used. We need to study the interrelationships between the themes, formalize impacts across the themes in a comparable manner, and create a common approach for cross-project evaluation to reveal and evaluate tradeoffs. An open and iterative process structure and the use of multidisciplinary teams are essential to forging a consistent approach to methodological integration.

Knowledge integration can be hard due to a lack of mutual understanding and respect among practitioners. The KM strategy should include means of creating a supportive and collaborative culture for knowledge exchange. Guidelines for defining the scope and goals of the assessment are a first step. Is the goal to assess existing scenarios or to conduct an iterative process for creating new scenarios? In the latter case, it is important to draw clear lines between the creative task and the assessment work. Then comes planning for the collective assessment of scenarios. Within this process, the following questions have to be addressed: what outcomes define success? What obstacles need to be addressed? Who are the actors, stakeholders, and interested parties? What level of comprehensiveness is necessary? What are the geographical and time boundaries of the study?

The increase in private sector leadership in supporting sustainability, as well as the increased reliance on outsourcing by public agencies, means that in addition to compliance management, public agencies should reconsider their role: which functions are more valuable, how much should be outsourced, and how to support further leadership by the private sector. This will require new initiatives and research into reengineering the scope and depth of public agencies’ contributions to collective action and learning. The increased role of what Kørnøv et al. (Citation2015) call street-level bureaucracy (more engagement scope and depth) highlights the need to consider reengineering.

The suggested sub-functions in this regard include:

Promote coordination: it is important to create a common understanding of the need for coordination and how to conduct it. Impact assessment officials should support this by clearly identifying and profiling the needed participants and encouraging their collective agreement on strategies and plans for promoting and handling the coordination requirements.

Provide analytical tools: with the increasing formality of knowledge capture and representation, applying analytics can help in discovering gaps or areas of overlap between impact assessment teams and between them and the teams of other sustainability assessments.

Support innovation and reengineering: similar to the previous functions, and in regard to coordination tasks and efficiency, this sub-function focuses on benchmarking, supporting research and development, and tracking trends.

Reconciling data heterogeneity and the incompatibility of analysis tools are major problems. The ad hoc nature of team formation and team roles in impact assessment (compared to the more standardized fashion of EIA) is another major challenge. Who will do what within one assessment and in cross-project assessment? The issues of access, security of information, and privacy also have to be addressed. At a more subtle level, we must plan to address the discrepancy between actors’ expertise and knowledge-processing abilities caused by financial and intellectual gaps between stakeholders, i.e. the digital divide. The power balance between stakeholders is another major issue to handle, including assuring that economic and political influence do not trump knowledge utilization (Knorr-Siedow and Tosics Citation2005).

Enforce the Act

In setting the KM strategy, the collection and use of best practices are important not only for harnessing and disseminating technical and processual knowledge, but also for navigating the complexity of regulations. The extensive debate about the balance of power or the potential for collaboration between citizens and experts neglects an essential third actor: bureaucratic officials (civil servants). Their knowledge (gained from EIA shortcomings) of political and administrative procedures is essential to prevent impact assessment from devolving into an EIA-style box-ticking process and, at the same time, helps expedite the more complex tasks of impact assessment. The focus here is to use the wealth of bureaucratic knowledge for streamlining the process, eliminating redundancies by promoting effective generation of alternatives, optimizing decision-making tasks, and validating actions through monitoring and auditing mechanisms.

Constant analysis of the match between the assessment process and real-world needs is also important. Markolf et al. (Citation2015) presented a system based on text classification to help governments evaluate the relevance of adopted policy/assessment frameworks.

If impact assessment is to avoid devolving into a lengthy or continuous administrative exercise, an adequate policy framework is needed to guide the process – especially in relation to describing how both the structured and flexible parts should be conducted and balanced. ‘Much of human reasoning is an exercise in creative rationalization to defend and promote things in which people have a vested interest’ (Taylor et al. Citation2004). Bureaucrats, being the constant actor in impact assessment management, have expertise in spotting and handling conflicting interests. They can also be very helpful in setting up learning goals based on the project and community profile (White and Noble Citation2013). This role is highlighted as EIA practice has evolved to include regular outsourcing of many tasks to the private sector, alongside increasing investment and leadership by the private sector in considering sustainability in its practice (see Movassaghi and Bramhandkar 2012).

Aside from the obvious, enforcing the legislation also involves ensuring that the authorities responsible for administering and managing the process are accountable for their actions. To address this issue, the US Department of Energy conducts periodic analyses and presents the results in quarterly lessons-learned reports – especially in relation to enforcing the National Environmental Policy Act (NEPA). In another example, monitoring is emphasized in Australian EIA, where the legislation is upheld by various compliance mechanisms (AUS DOEWR Citation2014).

The sub-functions suggested in this regard include the following:

Streamline the enforcement process: through documenting milestones and achievements, and establishing indicators to measure process efficiency.

Promote effective generation of alternatives: in contrast to their overemphasis on compliance, bureaucrats can play a major role in organizational change by shifting their attitude toward generating new ideas while, at the same time, controlling any over-scoping of the analysis. This requires consistent documentation of the rationale for alternatives (which can help in future benchmarking) and the establishment of reusable metrics for evaluating alternatives.

Optimize the decision-making tasks and enhance monitoring and audit: provide the needed balance between standardization of the process and the flexibility needed to accommodate the context of the policy, plan, or program (PPP). This includes establishing a common set of steps to be conducted in all assessments (to help in benchmarking and support consistency); appointing qualified public officials to the evaluation team; and retaining qualified technical advisors to enhance the quality and objectivity of technical aspects.

Organizational guiding principles: assuring effective operation

Change is not easy – especially in the context of government and bureaucracy. At the same time, in facing the challenges of impact assessment, governments are offered a variety of analysis/management frameworks. With time, the practice of impact assessment (and EIA before it) will suffer from ‘muddling through’, where tasks, practices, goals, and indicators become static and lose the required agility to face changes (Markolf et al. Citation2015). It is more important to invest in establishing a learning, knowledge-enabled organization than to adopt the most recent framework. A concerted effort must be made to reengineer work processes and change organizational culture to match the sociopolitical dynamics.

A set of fundamental principles should guide the delivery and performance of the aforementioned functions to ensure that the system operates effectively and is continuously improved to match needs and to use/adapt to new knowledge. Based on the literature review, these principles are proposed to include the following (see Hommes et al. Citation2009; Thomson et al. Citation2011; Hoogmartens et al. Citation2014):

Accountability: every stakeholder should be held accountable against a set of benchmarks related to performance measures;

Transparency: work has to be communicated and conducted in a transparent manner;

Life-Cycle Thinking: plans have to integrate all aspects of life-long assessment/costs and impacts of systems/projects;

Collaboration: work processes have to promote collaborative analysis to enhance knowledge sharing and cross-evaluation among regional players;

Reliability: management practices and information use have to be held to very high standards that emphasize reliability;

Efficiency: technical and managerial aspects of assessments have to be conducted in an efficient, less procedural manner;

Consistency: the application of assessment tools should be done in an accordant, compatible, and coherent manner;

Rational Processing: actions should be logical and should be taken by those having or exercising reason, sound judgment, and good sense; and

Objectivity: decisions should not be influenced by personal feelings, interpretations, or prejudice, i.e. they should be unbiased.

Expert input

Participating experts were asked to assess the significance and role of each function. To facilitate their assessment, a scale of 1–5 was used. This is not meant to be a universal measure of importance; rather, it is more an assessment of how challenging a function is and how much change is needed to perform it. The details of their assessments are included in the supplementary material. The sub-function ‘Collaborating with other countries’ (Function: Facilitate Coordinated Planning) received the lowest score. ‘Standardize methods of post-project evaluation’ (Function: Facilitate Coordinated Planning) was also given a low rating. This may be attributed to the difficulty of such a function given the diversity of evaluation techniques and their context-dependency.

The sub-function ‘Engage stakeholders’ (Function: Communicate Sustainable Practices) was rated the most significant. This emphasizes the increasing role of communities in impact assessment as well as the challenge of harmonizing and brokering knowledge. In fact, overall, the strongest response was in regard to the main function ‘Communicate Sustainable Practices’.

All of the sub-functions contained within the main function ‘Enforce the Act’ were seen as significant (compliance is still the core function). However, any intensification of enforcement beyond what is already practiced would only stifle both the efficiency and effectiveness of the process.

To evaluate the relevance of the organizational change principles, experts were asked during the interviews to indicate which principles ought to be associated with the various functions of KM. Each expert was asked to select the principles that fit each of the proposed functions. The data sample comprises 18 responses and can be interpreted as the number of respondents who agreed that a specific principle should be associated with a specific function (the details of their responses and the statistical analysis are included in the supplementary material).
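To illustrate how such responses can be tallied (the selections below are hypothetical placeholders, not the actual survey data), each expert’s answers reduce to a sub-function-by-principle count matrix; the counts correspond to Figure 2(a), and their row sums give the stacked totals of Figure 2(b):

```python
# A hedged sketch of tallying expert responses. Each expert selects, for each
# sub-function, the principles they deem relevant. The response data here are
# hypothetical placeholders, not the survey data reported in this paper.
from collections import Counter

responses = [  # one dict per expert: sub-function -> selected principles
    {"Engage stakeholders": ["Transparency", "Collaboration"]},
    {"Engage stakeholders": ["Transparency", "Accountability"]},
    {"Identification of issues": ["Consistency"]},
]

# Number of experts associating each principle with each sub-function (Figure 2(a)):
counts = Counter()
for expert in responses:
    for sub_function, principles in expert.items():
        for principle in principles:
            counts[(sub_function, principle)] += 1

# Stacked total per sub-function, i.e. the height of its bar (Figure 2(b)):
stacks = Counter()
for (sub_function, _), n in counts.items():
    stacks[sub_function] += n

print(counts[("Engage stakeholders", "Transparency")])  # 2
print(stacks["Engage stakeholders"])                    # 4
```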

Figure 2 illustrates a general comparison of the relationship between functions and guiding principles. It is important to note that the numbers in the figure are not meant to be universal scores or scores developed through objective analysis. They merely reflect the general sentiment of the experts interviewed at the time of this research. In Figure 2(a), the range of scores for each principle is shown. The horizontal dimension shows all the sub-functions (a total of 36 – see Table 1); the vertical dimension shows, for each sub-function, the principles that are relevant to it and the number of experts who suggested so. For example, for sub-function #10 ‘Engage stakeholders’, the top relevant principle is ‘transparency’, and this was the view of 16 experts. While all principles tend to have comparable ranges, ‘transparency’ is almost consistently at the top. Frequent emphasis was also directed at ‘accountability’, ‘life-cycle thinking’, ‘collaboration’, and ‘consistency’.

Figure 2. The relevance of principles in relation to proposed functions.

Figure 2(b) is meant to help study (qualitatively) the interrelationship between principles and functions. It shows the stacks of principles for all 36 sub-functions (listed in Table 1). The vertical dimension lists all 36 sub-functions, and the horizontal dimension is a stack of the scores of the principles that experts deemed relevant for each. The exact values of the stacks should not be taken as absolute. It is the general trend and the relative values of the stacks that provide insight: how consistent are the assessments (scores) of each principle, which functions garner the (need for) most principles, and by what scale?

While we collected formal ratings from experts (see the supplementary notes), the actual scores are not the main target of this part of the survey – the scores will change from time to time and across jurisdictions. By contrasting the stacks of each function, several points can be observed. ‘Identification of issues’, ‘Engage stakeholders’, and ‘Establish overall decision-making procedure’ received the highest overall assessment as the functions needing change – they have the largest (cumulative) stacks. They are in need of significant change in comparison to other functions. Frequently, the principle of ‘transparency’ has the highest score, meaning more need for change in this regard. It reaches its highest level in regard to ‘selecting stakeholders’ – an obvious reflection of the growing diversity of stakeholders and the need for inclusion. The functions with the smallest stacks are ‘Support research and development’ and ‘Collaborate with other countries’. While the transparency principle is highlighted in almost all functions, its lowest score is recorded in relation to the function ‘support research and development’.

The sum of scores for all principles in relation to a single function is not meaningful (the principles are considered mutually exclusive). However, the stacked representation of these scores illustrates that the function with the lowest performance across most principles is ‘identification of issues’, followed by ‘Engage stakeholders’ and ‘Establish overall decision-making procedure’. These are essentially the fundamental components of any decision-making process, which means that there is a real need to re-engineer the decision-making framework and clarify its processes in Ontario. Overall, ‘collaboration’ and ‘consistency’ received relatively high scores when considered against all functions. The least support, across all four primary functions, was associated with the principle of ‘Rational Processing’. These results validate the argument that, in order to effectively promote sustainability, both a knowledge-enabled approach and business reengineering are required.

A prototype knowledge matrix

We present here a matrix for documenting the facts/features of EIA/impact assessment cases along with the relevant knowledge gained in conducting them. The matrix aims to capture an extended description of the cases beyond the direct or specific issues of the case: what can be learned from each case? Each case is, effectively, annotated for ease of follow-up analysis and retrieval. Repeated use of this formalized model/matrix will create a wealth of cases upon which new cases can draw for best practices or pitfalls.

To illustrate the use of the matrix, a prototype web portal has been developed to capture knowledge features from a sample of previously conducted EIAs. The knowledge matrix prototype contains five dimensions: (1) General Project and EIA Information (Project); (2) Site and Area Characteristics (Site); (3) Generation of Alternatives (Alternatives); (4) Stakeholder Description and Involvement (Stakeholders); and (5) Impact Categorization (Impacts). In essence, a semantic model of EIA cases is established, where the attributes of a study are recorded. A case has general information related to its location, duration, and cost (to list a few examples). Site characteristics impacting the study are also documented. Cases are also distinguished by the level and nature of alternative generation. The profile of stakeholders and their engagement levels and methods is another important feature. Finally, the nature and categories of impacts are a major distinguishing factor in the semantic model of EIA cases. To enrich the semantic description of each case and support relevant retrieval, a set of key items (sub-dimensions) was developed for each of these dimensions (see Surahyo and El-Diraby 2009). These concepts can be used to document knowledge from each case for future needs and are compiled so as to support retrieval and trend analysis. It should be noted that, semantically, the five matrix dimensions are intricately connected and possibly dependent on one another, but the scope of this discussion is limited to the individual features of each. A minimal sketch of such a case record is shown below.
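
As an illustration of how the five dimensions could be encoded for storage and retrieval, the sketch below defines a hypothetical case record in Python. The field names and the retrieval helper are illustrative assumptions, not the prototype portal itself; the actual sub-dimensions follow the schema in Surahyo and El-Diraby (2009).

```python
# A minimal, hypothetical sketch of a knowledge-matrix case record.
# Field names are illustrative assumptions; the real sub-dimensions
# follow Surahyo and El-Diraby (2009) and the prototype portal.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EIACase:
    # (1) General project and EIA information
    name: str
    location: str
    duration_months: float
    cost_estimate: float
    # (2) Site and area characteristics
    site_characteristics: List[str] = field(default_factory=list)
    # (3) Generation of alternatives, with documented rationale
    alternatives: List[str] = field(default_factory=list)
    # (4) Stakeholder description and involvement
    stakeholders: List[str] = field(default_factory=list)
    engagement_methods: List[str] = field(default_factory=list)
    # (5) Impact categorization
    impact_categories: List[str] = field(default_factory=list)
    # Free-text lessons learned, annotated for later retrieval
    lessons_learned: List[str] = field(default_factory=list)

def find_cases(cases: List[EIACase], impact: str) -> List[EIACase]:
    """Retrieve previously documented cases sharing an impact category,
    supporting the retrieval/trend analysis described above."""
    return [c for c in cases if impact in c.impact_categories]
```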

Knowledge matrices were developed for 14 EIA cases and are presented in the supplementary material.

Discussion and conclusions

Impact assessment is now situated in a different sociopolitical context compared to traditional EIA. Traditional EIA is intrinsically normative and evaluative, with a scope of KM that is limited/targeted to scientific/engineering evidence finding and analysis. Modern impact assessment embraces a creative, holistic, integrated/multidisciplinary, and (community) empowerment approach to policy-making. While many of these elements were added on to the traditional assessment role of EIA in subsequent years, they are at the core of impact assessment philosophy and should drive the practices of public agencies. There is, then, a methodological challenge for KM: how to manage complexity, subjectivity, and the variabilities of knowledge(s), and how to link these to local contexts. The institutional change is more challenging: developing a culture for coordinated policy-making across sectoral needs and limitations, and building a healthy mechanism for CE to position sustainability at the core of the decision-making process.

More sustainability (assessment) functions are being performed outside these agencies due to (1) outsourcing and (2) the increased leadership and proactive work by the private sector in assessing and even promoting sustainability. Equally important is the evolving role of communities as stakeholders in knowledge generation as well as decision-making. To support effective consideration of sustainability, public agencies should not limit their role to compliance management or rely on private sector consultants to conduct the technical and community aspects of the assessment. A commensurate paradigm shift at the organizational level of public agencies is needed. An increased focus (and more research) should be placed on the role (function) of public agencies in promoting the culture of collaborative learning. These agencies should not limit the execution of these functions to fulfilling bureaucratic or legislative mandates. Rather, they should consider options for reengineering their role in a manner that balances the need for rigorous compliance with the practice and mentality of a 'learning organization'. Partnership with communities and the private sector in creating opportunities for 'learning' can be instrumental to realizing the transformative nature of sustainability assessment – one can argue that this is a core function for public agencies in post-modern assessment practices.

The transformative nature of sustainability consideration requires a KM strategy that is more focused on processes of knowledge generation, promotion, and reuse – not only in the context of technical analysis but in all aspects of the assessment: initiation, definition of issues, community engagement, information delivery, and brokerage of consensus/agreement. KM thus evolves into a driver of and an essential asset to all assessment tasks. KM is not a set of assessment tools or rules; rather, it is an organizational competency that should guide work processes and decision-making.

We conducted this study to explore how we can learn from EIA experiences to enhance sustainability assessment. The study developed a framework of functions/processes as well as organizational principles and competencies needed to handle KM challenges. These include the complexities of impact assessment and the diversity of its topics; the expanded temporal and spatial span of sustainability analysis; and the increasing demands by communities for more active involvement in the creation of solutions and in decision-making. At its core, effective KM requires an organizational culture change: embracing an epistemology of knowledge multiplicity and reshaping the role of public agencies as initiators and promoters of innovation and collective decision-making. Learning from the shortcomings of the EIA process can be the starting point for forging a new culture for impact assessment that is centered on better management of knowledge and learning at organizational and societal levels.

At the core of KM practices, public agencies should establish mechanisms for linking knowledge needs to the issues being considered, identifying acquisition techniques, and establishing procedures for integration. Collaborating with other jurisdictions to benchmark best practices and exchange expertise is another important function. Formally tracking KM trends is an essential task, including formal documentation of the progress of the assessment process along with the capture of new knowledge; establishing metrics to measure progress in harnessing knowledge; and standardizing methods for post-assessment evaluation and feedback.

Within the co-creation mentality, a KM strategy must promote effective generation of alternatives. This requires supporting consistent documentation of the rationale for alternatives (which can help in future benchmarking) and establishing reusable metrics for evaluating alternatives. To this end, effective communication with communities is now both an objective and a tool for harnessing context-aware alternatives. To support effective exchange of knowledge, the communication process in public engagement must assure accessibility, interaction, and outcome orientation. Investments in web and social media usage can help facilitate transparent sharing of data with a key knowledge producer: citizens. Encouraging a vibrant exchange of views is an essential function in harnessing knowledge bottom-up, especially in relation to scoping and conceptualization of the problem; envisioning and goal setting; framing of the decision-making approach; cross-checking facts and data; and evaluation of interim solutions. The use of advanced analytics can help extract quantifiable/explicit knowledge constructs from community debates (a minimal sketch of one such technique follows below). Protecting healthy debate is now a major need in impact assessment. A KM strategy must enable adequate access to knowledge, recognize the negative impacts of 'cherry-picking' of facts, and help filter irrelevant or untruthful information. Furthermore, a KM strategy must address the discrepancy between actors' expertise and technical agency.
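
As a minimal, hypothetical illustration of such analytics – not a technique prescribed by this study – the sketch below counts recurring terms in public comments to surface candidate scoping themes. All data, names, and the stopword list are illustrative placeholders.

```python
# A minimal, hypothetical sketch of lightweight analytics over
# community debates: surfacing recurring terms from public comments
# so bottom-up knowledge can be flagged for scoping. This is an
# illustrative assumption, not the study's analytics pipeline.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "is", "in",
             "for", "on", "are", "near", "need", "main"}

def recurring_terms(comments, top_n=5):
    """Count non-stopword terms across all comments and return the
    most frequent ones as candidate discussion themes."""
    words = []
    for comment in comments:
        words += [w for w in re.findall(r"[a-z']+", comment.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(top_n)

# Hypothetical community comments about a transit project.
comments = [
    "Noise and traffic near the transit station are the main concern.",
    "Traffic impacts on the school route need assessment.",
]
print(recurring_terms(comments))  # e.g. [('traffic', 2), ...]
```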

‘Identification of issues’, ‘Engage stakeholders’, and ‘Establish overall decision-making procedure’ were identified as the most challenging functions. These are essentially the fundamental components of any decision-making process, which means that there is a real need for reconsidering the decision-making framework and the formality of its processes (in Ontario).

In streamlining the process, establishing and tracking clear and measurable metrics can reduce time and support the use of formal measures for issues such as resource maintenance and efficiency, and socio-ecological civility and democratic governance. Unsurprisingly, transparency was identified as the core principle of a healthy KM strategy. Particular emphasis was also placed on accountability, life-cycle thinking, collaboration, reliability, efficiency, consistency, rational processing, and objective analysis.

Disclosure statement

No potential conflict of interest was reported by the authors.

Supplemental data

Supplemental data for this article can be accessed at https://doi.org/10.1080/14615517.2018.1445179.


References

  • Arts J, Runhaar HA, Fischer TB, Jha-Thakur U, Van Laerhoven F, Driessen PP, Onyango V. 2012. The effectiveness of EIA as an instrument for environmental governance: reflecting on 25 years of EIA practice in the Netherlands and the UK. J Environ Assess Policy Manage. 14(4):1250025. doi:10.1142/S1464333212500251
  • [AUS DOEWR] Australian Government: Department of the Environment and Water Resources. 2014. Compliance and monitoring. http://www.environment.gov.au/epbc/compliance/index.html [accessed 2014 July 16].
  • Bäckstrand K. 2003. Civic science for sustainability: reframing the role of experts, policy-makers and citizens in environmental governance. Global Environ Politics. 3(4):24–41. doi:10.1162/152638003322757916
  • Blewitt J. 2014. Understanding sustainable development. London (UK): Routledge.
  • Bond A, Morrison-Saunders A, Pope J. 2012. Sustainability assessment: the state of the art. Impact Assess Project Appraisal. 30(1):53–62. doi:10.1080/14615517.2012.661974
  • Bond A, Retief F, Cave B, Fundingsland M, Duinker PN, Verheem R, Brown AL. 2018. A contribution to the conceptualisation of quality in impact assessment. Environ Impact Assess Rev. 68:49–58. doi:10.1016/j.eiar.2017.10.006
  • Bond AJ, Viegas CV, Coelho CCDSR, Selig PM. 2010. Informal knowledge processes: the underpinning for sustainability outcomes in EIA? J Cleaner Prod. 18(1):6–13. doi:10.1016/j.jclepro.2009.09.002
  • Bruijn HD, Heuvelhof ET. 2000. Networks and decision making. Utrecht: LEMMA Publishers.
  • Chinowsky P, Diekmann J, Galotti V. 2008. Social network model of construction. J Constr Eng Manage. 134(10):804–812. doi:10.1061/(ASCE)0733-9364(2008)134:10(804)
  • Cohen AM. 2010. NEPA in the hot seat: a proposal for an office of environmental analysis. J Law Reform. 169. http://repository.law.umich.edu/mjlr/vol44/iss1/5
  • Corral S, Monagas MC. 2017. Social involvement in environmental governance: the relevance of quality assurance processes in forest planning. Land Use Policy. 67:710–715. doi:10.1016/j.landusepol.2017.07.017
  • Davenport TH. 2015. Process management for knowledge work. In: vom Brocke J, Rosemann M, editors. Handbook on business process management. Berlin, Heidelberg: Springer; p. 17–35.
  • Edelenbos J, van Buuren A, van Schie N. 2011. Co-producing knowledge: joint knowledge production between experts, bureaucrats and stakeholders in Dutch water management projects. Environ Sci Policy. 14(6):675–684. doi:10.1016/j.envsci.2011.04.004
  • El-Diraby TE. 2006. Web-services environment for collaborative management of product life-cycle costs. J Constr Eng Manage. 132(3):300–313. doi:10.1061/(ASCE)0733-9364(2006)132:3(300)
  • El-Diraby TE, Wang B. 2005. e-society portal: integrating urban highway construction projects into the knowledge city. J Constr Eng Manage. 132(11):1196–1211.
  • Evers M, Jonoski A, Almoradie A, Lange L. 2016. Collaborative decision making in sustainable flood risk management: a socio-technical approach and tools for participatory governance. Environ Sci Policy. 55:335–344. doi:10.1016/j.envsci.2015.09.009
  • Fidélis T, Rosa AR, Albergaria R. 2016. Developing an analytical framework to assess the consistency of contents and terminology used by SEA reports for similar types of plans. J Environ Assess Policy Manage. 18(4):1650024. doi:10.1142/S1464333216500241
  • Fischer TB. 2003. Strategic environmental assessment in post-modern times. Environ Impact Assess Rev. 23(2):155–170.
  • Fischer TB. 2005. Having an impact? Context elements for effective SEA application in transport policy, plan and programme making. J Environ Assess Policy Manage. 7(3):407–432. doi:10.1142/S1464333205002158
  • Fischer TB. 2006. Strategic environmental assessment and transport planning: towards a generic framework for evaluating practice and developing guidance. Impact Assess Project Appraisal. 24(3):183–197. doi:10.3152/147154606781765183
  • Fischer TB. 2007. Theory and practice of strategic environmental assessment: towards a more systematic approach. London: Earthscan.
  • Fischer TB, Gazzola P. 2006. SEA effectiveness criteria—equally valid in all countries? The case of Italy. Environ Impact Assess Rev. 26:396–409. doi:10.1016/j.eiar.2005.11.006
  • Fischer TB, Gore T, Golobic M, Pinho P, Sykes O, Marot N, Waterhout B. 2015. Territorial impact assessment of European draft directives—the emergence of a new policy assessment instrument. Eur Planning Stud. 23(3):433–451. doi:10.1080/09654313.2013.868292
  • Gasparatos A, El-Haram M, Horner M. 2008. A critical review of reductionist approaches for assessing the progress towards sustainability. Environ Impact Assess Rev. 28:286–311. doi:10.1016/j.eiar.2007.09.002
  • Gibson RB. 2005. Sustainability assessment: criteria and processes. Sterling (VA): Earthscan.
  • Golobič M, Marot N, Kolarič Š, Fischer TB. 2015. Applying territorial impact assessment in a multi-level policy-making context – the case of Slovenia. Impact Assess Project Appraisal. 33(1):43–56. doi:10.1080/14615517.2014.938438
  • Hayes SJ, Barker A, Jones CE. 2017. Re-examining the rationale for strategic assessment: an evaluation of purpose in two systems. J Environ Assess Policy Manage. 19(4):1750020–1750046.
  • Hommes S, Hulscher SJ, Mulder JP, Otter HS, Bressers HTA. 2009. Role of perceptions and knowledge in the impact assessment for the extension of Mainport Rotterdam. Mar Policy. 33(1):146–155. doi:10.1016/j.marpol.2008.05.006
  • Hoogmartens R, Van Passel S, Van Acker K, Dubois M. 2014. Bridging the gap between LCA, LCC and CBA as sustainability assessment tools. Environ Impact Assess Rev. 48:27–33. doi:10.1016/j.eiar.2014.05.001
  • Hurlbert M, Gupta J. 2015. The split ladder of participation: a diagnostic, strategic, and evaluation tool to assess when participation is necessary. Environ Sci Policy. 50:100–113. doi:10.1016/j.envsci.2015.01.011
  • Jha-Thakur U, Gazzola P, Peel D, Fischer TB, Kidd S. 2009. Effectiveness of strategic environmental assessment – the significance of learning. Impact Assess Project Appraisal. 27(2):133–144. doi:10.3152/146155109X454302
  • Jiliberto R. 2011. Recognizing the institutional dimension of strategic environmental assessment. Impact Assess Project Appraisal. 29(2):133–140. doi:10.3152/146155111X12959673795921
  • Kinawy SN, Bakht MN, El-Diraby TE. 2017. Mismatches in stakeholder communication: the case of the Leslie and Ferrand transit stations, Toronto, Canada. Sustainable Cities Soc. 34:239–249. doi:10.1016/j.scs.2017.06.020
  • Knorr-Siedow T, Tosics I. 2005. Knowledge management and policy application in urban management and housing. Working paper. Erkner (Germany): Leibniz-Institute for Regional Development and Structural Planning.
  • Kørnøv L, Zhang J, Christensen P. 2015. The influence of street level bureaucracy on the implementation of strategic environmental assessment. J Environ Planning Manage. 58(4):598–615. doi:10.1080/09640568.2013.873711
  • Krueger T, Page T, Hubacek K, Smith L, Hiscock K. 2012. The role of expert opinion in environmental modelling. Environ Modell Softw. 36:4–18. doi:10.1016/j.envsoft.2012.01.011
  • Lambert JH, Karvetski CW, Spencer DK, Sotirin BJ, Liberi DM, Zaghloul HH, Koogler JB, Hunter SL, Goran WD, Ditmer RD, et al. 2011. Prioritizing infrastructure investments in Afghanistan with multiagency stakeholders and deep uncertainty of emergent conditions. J Infrastruct Syst. 18(2):155–166.
  • Law J. 1994. Organizing modernity: social ordering and social theory. Oxford: Blackwell.
  • Lehtonen M. 2009. Indicators as an appraisal technology: framework for analysing policy influence and early insights into indicator role in the UK energy sector. 8th Int. Conf. of the European Society for Ecological Economics; June 29–July 2; Ljubljana, Slovenia.
  • Markolf SA, Klima K, Wong TL. 2015. Adaptation frameworks used by US decision-makers: a literature review. Environ Syst. 35(4):427–436.
  • McClean S, Shaw A. 2005. From schism to continuum? The problematic relationship between expert and lay knowledge—an exploratory conceptual synthesis of two qualitative studies. Qual Health Res. 15(6):729–749. doi:10.1177/1049732304273927
  • Miles LD. 2015. Techniques of value analysis and engineering. Portland (OR): Miles Value Foundation.
  • Morrison-Saunders A, Fischer T. 2006. What is wrong with EIA and SEA anyway? A sceptic's perspective on sustainability assessment. J Environ Assess Policy Manage. 8(1):19–39.
  • Morrison-Saunders A, Pope J, Bond A, Retief F. 2014. Towards sustainability assessment follow-up. Environ Impact Assess Rev. 45:38–45. doi:10.1016/j.eiar.2013.12.001
  • Morrison-Saunders A, Retief F. 2012. Walking the sustainability assessment talk—progressing the practice of environmental impact assessment (EIA). Environ Impact Assess Rev. 36:34–41. doi:10.1016/j.eiar.2012.04.001
  • Nik Bakht M, El-Diraby TE. 2016. Communities of interest–interest of communities: social and semantic analysis of communities in infrastructure discussion networks. Comput-Aided Civ Inf. 31(1):34–49. doi:10.1111/mice.12152
  • Noble B, Nwanekezie K. 2017. Conceptualizing strategic environmental assessment: principles, approaches and research directions. Environ Impact Assess Rev. 62:165–173. doi:10.1016/j.eiar.2016.03.005
  • Olagunju A, Gunn JA. 2016. Challenges to integrating planning and policy-making with environmental assessment on a regional scale – a multi-institutional perspective. Impact Assess Project Appraisal. 34(3):236–253. doi:10.1080/14615517.2016.1176412
  • Organisation for Economic Co-operation and Development. 2006. Applying strategic environmental assessment: good practice guidance for development co-operation. Paris: OECD.
  • Pieper M, van der Veen A, editors. 2005. Foundations of IT service management: based on ITIL. Van Haren Publishing.
  • Pope J, Petrova S. 2017. Sustainability assessment: a governance mechanism for sustainability. In: Hartz-Karp J, Marinova D, editors. Methods for sustainability research. Northampton (MA): Edward Elgar Publishing; p. 142.
  • Reed MS, Evely AC, Cundill G, Fazey I, Glass J, Laing A, Newig J, Parrish B, Prell C, Raymond C, et al. 2010. What is social learning? Ecol Soc. 15(4):r1 [online]. http://www.ecologyandsociety.org/vol15/iss4/resp1/
  • Sinclair AJ, Diduck AP. 2001. Public involvement in EA in Canada: a transformative learning perspective. Environ Impact Assess Rev. 21(2):113–136. doi:10.1016/S0195-9255(00)00076-7
  • Sinclair AJ, Diduck AP. 2017. Reconceptualizing public participation in environmental assessment as EA civics. Environ Impact Assess Rev. 62:174–182. doi:10.1016/j.eiar.2016.03.009
  • Sinclair AJ, Diduck A, Fitzpatrick P. 2008. Conceptualizing learning for sustainability through environmental assessment: critical reflections on 15 years of research. Environ Impact Assess Rev. 28(7):415–428. doi:10.1016/j.eiar.2007.11.001
  • Slootweg R, Jones M. 2011. Resilience thinking improves SEA: a discussion paper. Impact Assess Project Appraisal. 29(4):263–276.
  • Snodgrass TJ, Kasi M. 1986. Function analysis: the stepping stones to good value. Madison (WI): University of Wisconsin.
  • Soria-Lara JA, Bertolini L, Brömmelstroet M. 2016. An experiential approach to improving the integration of knowledge during EIA in transport planning. Environ Impact Assess Rev. 56:188–199. doi:10.1016/j.eiar.2015.10.007
  • Surahyo M, El-Diraby TE. 2009. Schema for interoperable representation of environmental and social costs in highway construction. J Constr Eng Manage. 135(4):254–266. doi:10.1061/(ASCE)0733-9364(2009)135:4(254)
  • Tajima R, Fischer TB. 2013. Should different impact assessment instruments be integrated? Evidence from English spatial planning. Environ Impact Assess Rev. 41:29–37. doi:10.1016/j.eiar.2013.02.001
  • Taylor CN, Bryan CH, Goodrich CG. 2004. Social assessment: theory, process and techniques. 3rd ed. Lincoln (NE): Taylor Baines Associates.
  • Thomson CS, El-Haram MA, Emmanuel R. 2011. Mapping sustainability assessment with the project life cycle. Proc Inst Civil Eng-Eng Sustainability. 164(2):143–157. doi:10.1680/ensu.2011.164.2.143
  • Thygesen J, Agarwal A. 2014. Key criteria for sustainable wind energy planning—lessons from an institutional perspective on the impact assessment literature. Renewable Sustainable Energy Rev. 39:1012–1023. doi:10.1016/j.rser.2014.07.173
  • Vigar G. 2017. The four knowledges of transport planning: enacting a more communicative, trans-disciplinary policy and decision-making. Transp Policy. 58:39–45. doi:10.1016/j.tranpol.2017.04.013
  • Voinov A, Kolagani N, McCall MK, Glynn PD, Kragt ME, Ostermann FO, Ramu P. 2016. Modelling with stakeholders – next generation. Environ Modell Softw. 77:196–220. doi:10.1016/j.envsoft.2015.11.016
  • Wells C. 2015. The civic organization and the digital citizen. New York (NY): Oxford University Press. doi:10.1093/acprof:oso/9780190203610.001.0001
  • White L, Noble BF. 2013. Strategic environmental assessment for sustainability: a review of a decade of academic research. Environ Impact Assess Rev. 42:60–66. doi:10.1016/j.eiar.2012.10.003
