
An assessment of interventions for improving communication and trust in cost benefit analysis processes

Pages 28-42 | Received 20 Dec 2013, Accepted 24 Jun 2014, Published online: 10 Sep 2014

Abstract

Evaluation literature suggests that assessments of integrated transport plans should be an inclusive dialogue, for which it is crucial that participants communicate with and trust each other. However, cost benefit analysis (CBA) of integrated transport plans is often characterized by communication deficits and distrust among plan owners and evaluators. A literature review suggested five communication and trust-building interventions and related mechanisms that might improve this. In this paper, we have tested the efficacy of these five communication and trust-building interventions by applying them in an experiential study with two sequential cases, representing ‘close to real’ situations. The research aimed to develop field-tested knowledge to address the aforementioned class of CBA process problems. The research demonstrated how the five interventions could facilitate an exchange of information, knowledge and experiences, which – according to the participants – will increase the effectiveness of the CBA. Furthermore, it illustrated that a communication and trust-building strategy such as the one tested might be a useful complement to CBA practices, if adapted to the characteristics of the specific assessment process and planning context.

1. Introduction

1.1 A problematic use of CBAs

Cost benefit analysis (CBA), grounded in welfare economics, is a frequently used method for assessing transport plans (Vickerman Citation2000; ECMT Citation2004; Odgaard et al. Citation2005; Annema et al. Citation2007). However, significant doubts have been raised as to whether CBA is usable in integrated transport planning contexts (Mackie & Preston Citation1998; Jong & Geerlings Citation2003; Naess Citation2006; Annema et al. Citation2007; Mackie Citation2010; Beukers et al. Citation2012). Complex transport land-use plans in particular, which aim to influence fundamental but hard-to-grasp aspects of societal challenges like sustainability or quality of life, are difficult to fit into a CBA logic (Vaughan et al. Citation2000; Ziller & Phibbs Citation2003; Van Wee & Molin Citation2012). A number of content-related problems hamper the CBA assessment of such plans: disputable calculation methods for translating ‘soft’ variables like the quality of nature into money (Mackie & Preston Citation1998; Vaughan et al. Citation2000; Ziller & Phibbs Citation2003), missing information about winners, losers and equity issues (Ackerman & Heinzerling Citation2002) and misrepresentation of long-term and irreversible effects (van Wee Citation2011), to name a few (see Mouter et al. Citation2013 for an overview of CBA-related problems).
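To make concrete where these monetization problems enter the calculation, a standard textbook formulation of the CBA decision criterion is sketched below; it is not taken from this paper or the works cited above, and the symbols are generic.

\[
NPV = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t},
\qquad
BCR = \frac{\sum_{t=0}^{T} B_t\,(1+r)^{-t}}{\sum_{t=0}^{T} C_t\,(1+r)^{-t}}
\]

Here \(B_t\) and \(C_t\) are the monetized benefits and costs in year \(t\), \(r\) is the social discount rate and \(T\) the appraisal horizon. The content-related problems listed above all concern the step of expressing an effect as part of \(B_t\) or \(C_t\): ‘soft’ effects are hard to monetize, distributional information disappears in the aggregation, and long-term effects are diminished by discounting.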

In addition, the process of applying CBA itself is often highly problematic (Sager & Ravlum Citation2005; Damart & Roy Citation2009; Eliasson & Lundberg Citation2010; Martinsen et al. Citation2010; Veisten et al. Citation2010; Beukers et al. Citation2012). Several studies of planning practitioners' perceptions indicate that CBAs of integrated transport plans are mainly perceived as a final judgment of a plan's fulfillment of CBA criteria, leading some planners to see the CBA as just another obstacle to overcome rather than an opportunity for learning and for improving the plan, as many practitioners would prefer (ECMT Citation2004; Page et al. Citation2009; Beukers et al. Citation2012). Furthermore, these processes are often characterized by severe communication deficits and distrust between the involved plan owners and evaluators, who tend to act in opposition to each other (Beukers et al. Citation2012). The latter study, for example, found that plan owners fear a dominance of easier-to-calculate ‘hard’ effects over difficult-to-calculate ‘soft’ effects in decision making and perceive the CBA as not understandable. Evaluators, on the other hand, perceived that plan owners use CBAs strategically by embellishing their plans (see also Morrison-Saunders et al. Citation2014 on communication difficulties across disciplines in integrated impact assessments).

1.2 Assessing integrated transport plans

This problematic use of CBA for assessing integrated transport plans seems rooted in the changing transport planning context. Transport plans have become ever more intertwined with economic, social, environmental and spatial issues, and integrated approaches to transport planning have therefore become increasingly relevant (Banister Citation2008; Straatemeier & Bertolini Citation2008). This affects how transport plans should be assessed (Willson Citation2001; Curtis Citation2008; Handy Citation2008; Hull et al. Citation2011). First, issues like quality of life and sustainability have become core subjects of transport planning (Banister Citation2008). However, the impact of plans on these issues is hard to measure or to translate into monetary terms (Mackie & Preston Citation1998). Moreover, these issues are context- and location-specific (Allmendinger Citation2002). This means that local knowledge and experience, often brought in by local experts and lay people, are essential for an accepted assessment (Runhaar & Driessen Citation2007; Stoeglehner et al. Citation2009). Second, and relatedly, transport planning is no longer an isolated profession; it continuously interacts with planning processes and professionals from other domains (e.g. land use, environment, health) (Bertolini et al. Citation2008). Third, interaction among stakeholders often takes place in early, more strategic phases of the planning process, which means that the supporting instruments used need to be sensitive to different dimensions, perspectives, goals and professional languages (Owens et al. Citation2004; Handy Citation2008; Healey Citation2009).

Because of this new context, assessments of integrated transport plans should be an inclusive dialogue in which social realities are constructed and networks of people, actions and thoughts are built, rather than merely objective, technical-rational analyses (Khakee Citation2003; Selicato & Maggio Citation2011). These assessments should ideally take place in an ongoing creative process of gradual learning between plan owners and evaluators, and between plans and analyses. Thus, it is crucial that actors communicate with and trust each other, actively engage with and appreciate each other, recognize entrenched positions and struggles and gain insight into their strong and weak points (Saarikoski Citation2000; Kidd & Fisher Citation2007; van Buuren & Nooteboom Citation2009). At the same time, the limits of such deliberative planning, due for instance to the influence of power imbalances, should also be recognized (Huxley & Yiftachel Citation2000). However, CBA processes are not currently characterized as such. On the contrary, as mentioned, practitioners often perceive CBAs as a final judgment rather than a learning process, and plan owners and evaluators experience communication deficits and distrust.

1.3 Communication and trust as conditions for using the CBA for learning

The preceding discussion suggests that a different approach to CBAs might be necessary when assessing integrated transport plans, whereby high levels of communication and trust between plan owners and evaluators form crucial conditions for using the CBA for learning (see Figure 1).

Figure 1 Conceptual model on communication and trust conditions for a learning use of CBA. Source: Beukers et al. (Citation2014).

A review of the communicative planning and organizational learning literature followed this line of reasoning and structured existing insights on how to improve the levels of communication and trust between different types of experts, such as plan owners and evaluators in CBA processes (Beukers et al. Citation2014). This review concluded with five interventions and related mechanisms to increase the levels of both trust and communication (Table 1). The interventions represent the actions to be taken, while the mechanisms explain how and why the related communication and trust effects are expected to occur.

Table 1 Interventions and related mechanisms for improving communication and building trust between plan owners and evaluators in CBA processes (Beukers et al. Citation2014).

1.4 Aim and setup of this article

The literature review suggested an array of interventions and related mechanisms that are argued to improve communication and trust between experts, such as plan owners and evaluators. However, their actual effectiveness in practice has not been tested, let alone in the context of CBAs of integrated transport plans. To improve the academic rigor and practical relevance of such theory-derived interventions and related mechanisms, testing them in the context of their intended use is fundamental (Van Aken Citation2004). Furthermore, by focusing on both the if (does it work?) and the how (how does it work, or not work?), testing also allows the interventions to be better fitted to the specific characteristics of CBA processes (Straatemeier et al. Citation2010). We therefore aim to answer the following question: Do the interventions derived from the theoretical literature improve communication and trust between plan owners and evaluators in CBA processes, and through which mechanisms? The research thus aims to develop ‘field-tested abstract knowledge’ to solve a class of CBA process problems (communication and trust issues between plan owners and evaluators). Such knowledge should not be interpreted as a prescription or recipe, but as a ‘design example of grounded technological rules’ for addressing these problems (Van Aken Citation2004).

Table 1 gives an overview of the tested interventions and related mechanisms. In short, the interventions ask plan owners and evaluators to (1) meet, (2) share and discuss the plan and CBA, (3) be prepared for the interaction, (4) have the interaction guided by a moderator and (5) use dialogue modes, presenting the work as ‘work in progress.’ The fifth intervention also calls for interaction early in the planning process, when the plan and assessment are still open for discussion (Runhaar & Driessen Citation2007).

The article is organized as follows. Section 2 discusses the methodological setup. We then present the research findings of the two cases in Sections 3 and 4. In Section 5, conclusions are drawn and further research steps are discussed.

2. Research methodology

To understand the effectiveness of the theory-derived communication and trust-building interventions, we follow the logic of the ‘experiential case study’ research design (Straatemeier et al. Citation2010). This approach allows practitioners and researchers to experience and reflect upon the interventions by applying them in an academically constructed case representing a ‘close to real’ situation. An important element of such an approach is finding a balance between staying close to practice (to guarantee ecological validity) and maintaining enough distance for all participants to enable critical questioning and analysis (to ensure internal validity; see Van Aken Citation2004). Based on the participants' experiences of and reflections on the application of the interventions in one case, the interventions can be refined and tested in a subsequent case. Such reflective cycles yield insights into if and how the underlying mechanisms work, as in ‘realistic evaluation’ (Pawson & Tilly Citation1997). However, although this approach is crucial for gaining the in-depth insight needed to answer our research question, a consequence of its time intensiveness is that it can only cover a small sample. This limits the ability to generalize our findings.

Here, two such experiential cases were set up. The situation close to practice was found in two early-stage CBAs of integrated transport plans in Amsterdam (the capital and largest city of the Netherlands) and Utrecht (the fourth largest city of the Netherlands). The participating practitioners were the actual plan owners who had been involved, whereas the evaluators were invited by the research team and had not been previously involved. Furthermore, the cases were clearly set up as an academic exercise; the participants were informed beforehand of the scientific, rather than practical, aim of the meeting.

Both cases followed the same procedure (Figure 2): performing the five interventions (dashed box in Figure 2), measuring the impact on the related communication and trust mechanisms and analyzing the acquired data. The application of the interventions in Case 2 was refined on the basis of the experiences and reflections from Case 1, completing one experiential cycle. The precise operationalization of the applied interventions is further explained in Sections 3 and 4 (see also Table 2).

Figure 2 Setup, experiential case study.

Table 2 Overview of the operationalization of the interventions in Case 1, the lessons learned, and the refined operationalization in Case 2.

2.1 How the information was analyzed

Five complementary quantitative and qualitative measurement instruments were applied to analyze the effects of the interventions on communication and trust between the participants. These mixed methods focused on different aspects of the communication and trust mechanisms (individual and group perceptions, interaction dynamics) and enabled triangulation (Bryman Citation2008, p. 611).

First, the interactions were observed by a member of the research team who was present in person, but did not participate (i.e. structured observation, Bryman Citation2008, p. 257). This observer rated statements about the specific communication and trust mechanisms on a five-point Likert scale for the participating plan owners and evaluators. For example, Mechanism 1 in Table 1, ‘Sharpening assumptions based on the input of others,’ was operationalized in the observation report as ‘The plan owners sharpened their assumptions based on the input of the evaluators’ (this statement was repeated with ‘the evaluators’ in place of ‘the plan owners’). Second, individual participants filled out a questionnaire (Bryman Citation2008, p. 216) containing statements on a five-point Likert scale describing their personal experiences of communication and trust during the meeting. These statements were similar to those rated by the observer. For example, Mechanism 1 was operationalized in the questionnaire as ‘I sharpened my assumptions based on the input of the others.’ Third, the interactions were recorded on video and analyzed by a different member of the research team. By using video, the analysis could focus on the role of the different participants – planners, evaluators and the moderator – during all stages of the case (Bryman Citation2008, p. 476).

Fourth, participants were invited to discuss in a focus group setting whether, how and why they thought the interventions influenced their communication and trust experiences (Bryman Citation2008, p. 473). Five statements were used to structure this discussion: ‘We had enough room (both literally, in terms of time, and figuratively) to perform a dialogue’; ‘We could discuss the plan and CBA simultaneously. Now they both feel more like a product of our own (a sense of ownership)’; ‘The dialogue would have been less fluent without having prepared ourselves for the dialogue’; ‘The moderator was crucial in the dialogue’; and ‘The plan and CBA were clearly works in progress and were open for our input.’ Fifth, and finally, two participants per case – one plan owner and one evaluator – were interviewed some weeks after the experience. These ‘reflective interviews’ functioned as a control for the results from the other research methods. Moreover, they gave some insight into whether, how and why the interventions influenced real-world planning processes and daily routines after the meeting.

The experiential case study thus resulted in rich quantitative and qualitative information about if and how the five applied interventions affected communication and trust between the participants in the two cases. Therefore, although the case study consists of a small sample, the variety of measurements enabled triangulation of our findings and increased their reliability (Bryman Citation2008, p. 611).
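To make the aggregation of the quantitative instruments concrete, the sketch below shows one way five-point Likert responses could be averaged per mechanism and per intervention, in the spirit of the scores reported in Figures 4, 7 and 8. It is a minimal illustration only: the data structures, the example scores and the mapping of mechanisms to interventions are hypothetical (the actual mapping is given in Table 1), and this is not the research team's analysis script.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical questionnaire responses: one dict per participant,
# mapping mechanism number -> five-point Likert score (1-5).
responses = [
    {"role": "plan owner", "scores": {1: 4, 2: 3, 5: 5}},
    {"role": "evaluator",  "scores": {1: 3, 2: 4, 5: 4}},
]

# Hypothetical mapping of mechanisms to the five interventions
# (the real mapping is given in Table 1 of the paper).
mechanism_to_intervention = {1: 1, 2: 1, 5: 1}


def average_per_mechanism(responses):
    """Mean Likert score per (mechanism, participant role) pair."""
    buckets = defaultdict(list)
    for r in responses:
        for mechanism, score in r["scores"].items():
            buckets[(mechanism, r["role"])].append(score)
    return {key: mean(vals) for key, vals in buckets.items()}


def average_per_intervention(responses, mapping):
    """Mean Likert score per intervention, pooling all mechanisms and roles."""
    buckets = defaultdict(list)
    for r in responses:
        for mechanism, score in r["scores"].items():
            buckets[mapping[mechanism]].append(score)
    return {intervention: mean(vals) for intervention, vals in buckets.items()}


if __name__ == "__main__":
    print(average_per_mechanism(responses))
    print(average_per_intervention(responses, mechanism_to_intervention))
```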

3. Experiential Case 1 – a CBA of station area development in Amsterdam

3.1 Applying the interventions in Case 1

In Case 1, the five theoretical interventions were operationalized as follows (Table 2). Six plan owners, four evaluators and a moderator met for approximately 1 h (excluding preparation and follow-up – see Figure 2) to jointly discuss an actual CBA assignment for an integrated transport plan (Intervention 1). The plan owners who participated had been involved in the actual planning process and represented different municipal authorities from the local districts and the central city. The four evaluators were asked by the research team to participate based on their knowledge of the CBA method.

The discussed plan proposed investments in train station areas in Amsterdam to facilitate transit-oriented development (Municipality of Amsterdam Citation2008). Amsterdam has at least 10 important train stations, and the municipality intended to use the CBA to prioritize investments across the different stations. In the case study, the CBA setup for two station areas, Sloterdijk and Lelylaan, was discussed (Intervention 2); see Figure 3.

Figure 3 (Colour online) Train stations in and around Amsterdam. The potential development of the station areas of Sloterdijk and Lelylaan, in red circles, was discussed in Case 1. Adapted from: Municipality of Amsterdam (Citation2008).

The participants were prepared at the start of the meeting by first discussing behavioral guidelines, underscoring the importance of being open, critical, non-defensive, sincere and aware of differences; getting to know the other's interests; and actively asking questions (Intervention 3). The discussion was guided by a moderator, a senior consultant who had experience facilitating similar discussions. He was asked by the research team to focus on: making everybody feel safe; letting participants speak freely; including all types of knowledge; focusing on finding shared interests and win–win solutions; and finding issues behind immediate agendas (Intervention 4). The discussion itself was guided by three substantive questions written on a memo board placed in the middle of the group that was to function as the dialogue mode (Intervention 5): What are the expected effects of the plan? How to measure them? And how to include these effects in the CBA?

3.2 Observing and reflecting on Case 1: influencing interpersonal communication and trust?

This section presents the prominent findings of Case 1 for each intervention, based on the five different measurement methods, describing whether and how communication and trust between the participating plan owners and evaluators increased. The findings from the questionnaire are shown in Figure 4. All quotes are translated from Dutch.

Figure 4 Communication and trust mechanisms perceived by plan owners and evaluators (five-point Likert scale), questionnaire Case 1.

3.2.1 Intervention 1: room for dialogue

The participants in Case 1 noted in the questionnaire that they felt safe, open and able to listen to each other, as these mechanisms had a Likert score above average (see Mechanisms 5, 7 and 8 in Figure 4). In the focus group discussion, the evaluators mentioned that their openness could have been influenced by not being actually involved in the case and not having any vested interests. Nevertheless, the evaluators were perceived by the plan owners as rather critical, as the plan owners mentioned in the group discussion. For example, after the plan was introduced by one of the plan owners, a participating evaluator asked, ‘So, what are you actually aiming for, and what are the goals? I missed that in your introduction.’ In the reflective interview, this evaluator explained that he felt the plan should have been structured more along problem–solution lines. Instead, the plan owners mainly presented general normative statements like, ‘The station should function as a gate to the city of Amsterdam,’ and ‘Our aim is to better include this part of the city [at the edge of the municipality] by extending the spatial qualities of the city center.’

The participants seemed to experience difficulties with understanding concepts and reasoning outside of their own domains. When the participants were asked to reflect on Intervention 1 in the focus group discussion, they pointed out that the time spent together was too little to build up a relationship and share a lot of information, especially because the discussed plan was perceived as complex and premature.

3.2.2 Intervention 2: sharing and discussing the plan and CBA together and simultaneously

Relating to Intervention 2, and in contrast with the findings of Intervention 1, the evaluators felt that they learned more about the plan and the participants perceived that they understood each other's viewpoints and reasoning (Mechanisms 12 and 15 scored above average on the questionnaire). In the focus group discussion, the participants stated that they appreciated being able to discuss together, with plan owners and evaluators, in such an early planning stage as in the experiential case, which is not common in the average Dutch CBA process (Beukers et al. Citation2012). One of the evaluators mentioned, ‘It is better to find out together what the assignment is exactly about than alone behind your desk.’ And a plan owner felt the discussion helped him to realize that their plan remained rather vague in CBA terms: ‘What are the indicators that matter [in CBAs]? It is clear to me now that we need to talk about that further.’

3.2.3 Intervention 3: being prepared

In the group discussion, the participants thought that although Intervention 3 was helpful in creating an open and fair discussion in a comfortable atmosphere, the behavioral guidelines used would have been more effective, and more necessary, if the participants had held strongly opposing viewpoints. As the participating evaluators were not involved with the actual planning and assessment, they mentioned feeling neutral towards the discussed topic.

Furthermore, the participants thought the preparation fell short on the content side, which limited the breadth and depth of the shared information. For example, one evaluator mentioned, ‘If we had had some homework [documentation about the plan], we could have been more critical.’ Also, one plan owner mentioned in the reflective interviews, ‘I didn't feel comfortable with the CBA terms that were discussed, and would have liked to know more about it in advance.’ Such information, however, was not provided in this case.

3.2.4 Intervention 4: facilitating discussion using a moderator

The participants perceived that the moderator helped them to be open in the discussion, speak freely and focus on finding issues behind immediate agendas (Mechanisms 20, 24 and 25). However, the moderator did not spur the participants to explain jargon or planning and CBA concepts, so, as mentioned earlier, the discussion remained somewhat superficial. Thus, although the moderator played an important role in creating a comfortable atmosphere for discussion, he did not pay enough attention to the explicit sharing of different types of knowledge – technical and non-technical, implicit and explicit, and across disciplines (Mechanism 23).

3.2.5 Intervention 5: use of dialogue modes, presented as ‘work in progress’

In this case, the fifth intervention was associated with the evaluators feeling encouraged to share their viewpoints and with the plan and CBA being discussed as work in progress (Mechanisms 28 and 30). However, the video analysis and the reflective interview with the participating plan owners suggested that the three substantive questions that steered the dialogue (What are the expected effects of the plan? How to measure them? And how to include these effects in the CBA?) were too CBA-focused and too precise, considering the prematurity of the plan. The dialogue mode used thus mainly reflected the evaluators' reasoning and, as such, was not very effective in stimulating communication and trust in the interaction. It hardly stimulated the participants to share more and richer knowledge, and the discussion remained somewhat superficial.

Nevertheless, the participants mentioned in the group discussion that both the plan and CBA were perceived as ‘work in progress’ and this helped to encourage their input. The evaluators, for example, felt they could influence the plan and the setup of its assessment: ‘The plan owners adopted what was being said, so that made me understand the plan was still in development.’

3.3 Reflecting on Case 1: refining the interventions

The findings from Case 1 show mixed results as to whether, and how, the five interventions helped to improve communication and trust between plan owners and evaluators. The most positively influenced mechanisms seemed to be feeling safe, being open towards each other, listening, asking critical questions and learning more about the plan and/or CBA. Less positively influenced mechanisms seemed to be sharing detailed information, being explicit about the reasoning behind one's viewpoints, forming new insights together and seeing shared interests (Figure 4). Overall, the participants in Case 1 seemed to have increased their mutual understanding only minimally, and the discussion remained somewhat superficial.

However, Case 1 also illustrated that such confusion and difficulties in understanding each other were still valuable for the plan owners, as they made them realize their plan remained rather vague for outsiders. It also became clearer to them which elements would be important when confronted with CBA reasoning. Furthermore, the position of the evaluators as not being actually involved had different effects. On the one hand, the evaluators mentioned they felt less critical because they did not have any vested interest. On the other hand, this ‘neutrality’ might have increased the ability of the participants to be open in the discussion.

3.3.1 Lessons from Case 1

Related to the sequential setup of the experiential case study, four possible reasons were identified for the limited effectiveness of the interventions in Case 1. First, the time dedicated to the dialogue did not seem commensurate with the complexity of the discussed topic. Second, the dialogue seemed to require better preparation of the participants for the discussion's content, not just process rules about attitudes towards the discussion. Third, the moderator could have paid more attention to urging the participants to be explicit and to explain any jargon used. And fourth, the three questions that steered the dialogue did not invite all participants to share their viewpoints or react to the views of others, as they mainly reflected the evaluators' CBA-oriented reasoning.

Although other reasons could be given for the limited effectiveness of the interventions in Case 1 (for example the personal characteristics of the participants or the political context of the planning task), the four aforementioned explanations were used to refine Interventions 1, 3, 4 and 5 as follows, leaving Intervention 2 unchanged (see also Table 2):

  • Improving Intervention 1 by devoting more time to the discussion

  • Improving Intervention 3 by also preparing the participants for the content of the discussion in advance

  • Improving Intervention 4 by asking the moderator to encourage participants to be explicit and avoid jargon

  • Improving Intervention 5 by using a dialogue mode that related to planning and CBA rationale

4. Experiential Case 2 – a CBA of tramway routes in the Utrecht region

The experiential case study continued by applying the refined interventions in a second case, following the same procedure as shown in Figure 2 (see also Table 2 for an overview of the operationalization of the interventions in Case 2).

4.1 Applying the refined interventions in Case 2

In Case 2, seven plan owners, two evaluators and a moderator met for approximately 2 h (refined Intervention 1) to discuss a CBA assignment for an integrated transport plan. The plan dealt with the development of a second tramway in the region of Utrecht in the Netherlands, initiated by the regional planning authority of Utrecht. The plan proposed two different tramway routes connecting the eastern and western parts of the region (Figure 5). Both routes were expected to increase the livability of the city center by replacing the buses that ran through it with a tram (Intervention 2). As in Case 1, the plan owners had been involved in the actual planning process and represented different governmental parties: the regional planning authority of Utrecht, the spatial planning and economic development departments of the city of Utrecht and the smaller neighboring municipality of Zeist. The two evaluators and the moderator were asked by the research team to participate and had not been previously involved. The moderator was a different person than in Case 1, but had similar experience with guiding CBA processes. One of the two evaluators had also participated in Case 1. Both evaluators had extensive experience in conducting complex CBAs.

Figure 5 Discussed tramway routes in the Utrecht region (gray lines). Adapted from: Bestuursregio Utrecht (Citation2012).

In preparation for the communication and trust-building interventions, a member of the research team held a preliminary meeting with the coordinating plan owner to review the information needed during the discussion, which the participants received beforehand (refined Intervention 3). Furthermore, a member of the research team held a preliminary meeting with the moderator to discuss her role in the discussion and to instruct her to encourage participants to be explicit and explain any jargon used (refined Intervention 4).

The discussion itself was steered by a dialogue tool called ‘the Effects Arena’ (Figure 6) (refined Intervention 5). The Effects Arena was inspired by a tool developed to support public housing companies when assessing plans with CBA (Stichting Experiment Volkshuisvesting Citation2010) and entails four main questions: Who are the initiators? What is the actual initiative? What are the expected effects? And what is the expected impact (including where and on whom)? Furthermore, the Effects Arena explores how the initiative could lead to the expected effects and how the effects could lead to the expected impacts, and it pays attention to both the local and regional contexts. The Effects Arena is a visual, interactive tool that aims to invite all participants to pick up a pencil and fill in their thoughts, to respond to each other and to jump from one question to another and back. It was printed on A1-sized paper and placed in the middle of the table to invite all participants to write or draw on it.

Figure 6 The Effects Arena. Adapted from: Stichting Experiment Volkshuisvesting (Citation2010).
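As an illustration of the tool's structure, the sketch below models the four main questions and the initiative-to-effect-to-impact links of the Effects Arena as a simple data structure. This is only an interpretation of the description above: the class and field names are our own assumptions and are not part of the tool itself.

```python
from dataclasses import dataclass, field

@dataclass
class Effect:
    description: str                               # e.g. "fewer buses through the city centre"
    impacts: list = field(default_factory=list)    # expected impacts: where and on whom
    scope: str = "local"                           # "local" or "regional" context

@dataclass
class EffectsArena:
    initiators: list                               # Who are the initiators?
    initiative: str                                # What is the actual initiative?
    effects: list = field(default_factory=list)    # What are the expected effects?

    def add_effect(self, description, impacts, scope="local"):
        """Record an expected effect and the impacts it is thought to lead to."""
        self.effects.append(Effect(description, list(impacts), scope))

# Hypothetical entry, loosely based on the Case 2 discussion (illustrative only).
arena = EffectsArena(
    initiators=["regional planning authority of Utrecht"],
    initiative="second tramway connecting the eastern and western parts of the region",
)
arena.add_effect(
    "buses through the city centre replaced by a tram",
    impacts=["higher quality of public space for residents and visitors"],
    scope="local",
)
```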

4.2 Observing Case 2: influencing interpersonal communication and trust?

In line with Case 1, this section presents the prominent findings of Case 2 for each intervention, based on the five different measurement methods, describing whether and how communication and trust between the participating plan owners and evaluators increased. The findings from the questionnaire are shown in Figure 7. All quotes are translated from Dutch.

Figure 7 Communication and trust mechanisms perceived by plan owners and evaluators (five-point Likert scale), questionnaire Case 2.

4.2.1 Intervention 1: room for dialogue

Case 2 scored relatively high on Mechanisms 1–8 and 11 (Figure 7): sharpening assumptions together, asking and receiving critical questions, giving attention to issues behind immediate agendas, listening carefully, sharing diverse information, being open in the discussion, feeling safe and getting to know each other. These mechanisms were apparent in the following example. The group discussed the assumption that replacing buses with a tramway would increase the quality of the public space of the city center. Some participants were skeptical about this, which encouraged the plan owners to explain how they thought this could happen. In response, one of the evaluators mentioned, ‘Well, solely replacing buses by trams would not be enough if the quality of the public space forms such an important element of the plan. Then you need to invest in that, too.’ This was perceived by the plan owners as a valuable change of scope, which they mentioned in the group discussion, as the plan was mainly concerned with investing in the tram infrastructure. Another example of the mentioned mechanisms at work was when one of the local plan owners wondered in the middle of a critical discussion, ‘Why are we actually aiming for a tramway, isn't that old-fashioned?’ This question prompted – besides laughter – the plan owners of the regional authority to further explain their reasoning: they expected that a tram would be more reliable and robust and provide a more pleasant traveling experience. So, although the tone of the discussion was critical, this did not irritate the plan owners or make them feel attacked. Instead, they took it as an opportunity to explain and refine their ideas.

Relatively low scores were measured for the mechanisms of building up a relationship and estimating how others would respond in the discussion (Mechanisms 9 and 10). Nevertheless, the participants mentioned in the group discussion that the 2 h dedicated to the dialogue were enough to discuss the plan thoroughly, and that with more time their concentration might have waned.

4.2.2 Intervention 2: sharing and discussing the plan and CBA together and simultaneously

The participants perceived that they understood the viewpoints of others and vice versa (Mechanism 15). The evaluators felt they had a good grasp of the plan and its relevant issues. One evaluator mentioned, ‘Discussing the plan and CBA in such a setting is far more vivid and dynamic than normal, when the main topic of discussion is negotiating over a sole assumption.’ The other evaluator added, ‘This [discussion] was also much more informative than reading about the plan in a memo, which normally happens.’

The plan owners also thought they gained a better understanding of the CBA technique, its reasoning, possibilities and limitations, as they mentioned in the group discussion and reflective interview. According to the participants, this happened because the evaluators explained throughout the discussion how certain effects would be approached in a CBA – for example, that ‘attracting more businesses’ would be considered an indirect effect, and that ‘atmosphere of the public space’ could be included in the CBA qualitatively. Furthermore, the group discussed whether the plan could be made more effective if some parts of it were phased differently. Through this discussion, new alternatives (i.e. a rearrangement of the original plan) emerged.

4.2.3 Intervention 3: being prepared

Intervention 3 scored relatively high on Mechanisms 16–19: not regarding unknown information as a threat, articulating one's own ideas and reasoning, and not feeling unpleasantly surprised when new arguments were raised (see the examples already given). The participants thus expressed feeling prepared for the discussion, which helped the evaluators to be critical without the plan owners feeling attacked when critical issues were raised.

However, as in Case 1, the participants stated during the group discussion that the behavioral guidelines were not strictly necessary in this particular instance, but would have been had conflicting parties been interacting – ‘If it was about getting funding it is a different story,’ one plan owner said. Nevertheless, they thought the right persons with adequate knowledge were present at the table, and that this was an important element of what they considered a fruitful discussion.

4.2.4 Intervention 4: facilitating discussion with a moderator

Mechanisms 20–25 of Intervention 4 scored well overall. According to both the questionnaire and the observation, the moderator focused the discussion on finding issues behind the immediate problem and ensured that everybody felt safe and could speak freely. For example, the moderator encouraged one of the plan owners from the economic department of the municipality to share his thoughts about the business areas that might be affected by the proposed tramway. Without this encouragement, a valuable insight could have been missed, as this plan owner appeared to be a rather quiet person.

Furthermore, it was mentioned in the group discussion that the moderator helped the group be time efficient, inclusive and explicit. She achieved this by using simple questions, like ‘What do you mean by that?’; ‘Can you explain that viewpoint further?’; or ‘Do we all agree with that or are there different viewpoints?’ before proceeding to the next box in the Effects Arena.

4.2.5 Intervention 5: use of dialogue modes, presented as ‘work in progress’

The scores of the fifth intervention's mechanisms were relatively high for feeling encouraged to react to others, giving viewpoints and seeing the discussed subject as ‘work in progress’ (Mechanisms 26, 29 and 30). This last mechanism was apparent, for example, when alternatives outside of the initial scope were discussed (see the example discussed under Intervention 2).

Furthermore, the participants mentioned in the group discussion and reflective interviews that the Effects Arena was of great support to the dialogue for sharing a lot of knowledge without becoming too broad or vague. A plan owner shared his experience saying, ‘Often discussions like these end up in ineffective chit chat without clear goals or agenda. An advantage of this meeting was having a structure that guided us how to approach the issues at stake.’ However, the participants missed some indicators in the Effects Arena, like ‘interests,’ ‘uncertainties,’ and ‘preconditions.’

4.3 Reflecting on Case 2

Compared to Case 1, Case 2 resulted in higher measured communication and trust effects of the interventions for both plan owners and evaluators, as shown in Figure 8. The refined interventions thus seemed to better fit the assignment in Case 2: the 2 h devoted to the discussion were considered enough, the preparation on both behavior and content was seen as effective, the moderator was judged to be well prepared for her role, and the discussion tool was perceived as suited to its task. Furthermore, the participants in Case 2 concluded during the group discussion and reflective interviews that the group had the appropriate knowledge and experience to perform the CBA assignment for the integrated transport plan at stake.

Figure 8 Average scores of the five interventions in the questionnaires in Cases 1 and 2 (five-point Likert scale).

These factors led to what the participants of Case 2 described as a fruitful and rich discussion in which diverse knowledge and information was shared and created, in contrast with their previous experiences. In the group discussion and reflective interviews, both plan owners and evaluators expressed their perception that the dialogue added value. The plan owners thought it valuable that the evaluators brought in knowledge the plan owners did not have, which helped them to understand how the CBA arrives at its conclusions and to determine whether a CBA was the appropriate assessment instrument for the plan at stake. As such, the CBA became less of an insider's tool, which they expected to increase overall support for and acceptance of the assessment. The plan owners also thought that the dialogue supported their critical view of the plan. However, they noted that having all stakeholders at the table would be necessary to get a complete picture.

From the evaluators' point of view, the dialogue was valuable for sharpening the problem analysis and for vividly understanding the issues behind the plan. They expected this last point to help these issues receive more, and more adequate, attention in a CBA report, even if it would not change the final CBA ratio. Furthermore, the evaluators expected that the dialogue could help them to work more effectively if all effects were discussed and explained by both sides. Through this, they thought, the analysis could fit better with the experiences of plan owners, and be better accepted and supported. Moreover, when its methods are widely shared, the evaluators thought, the CBA could become less threatening to non-CBA experts.

5. Conclusion and reflection

5.1 Conclusion

This paper started by discussing the problematic use of the CBA when assessing integrated infrastructure plans, and the process characteristics of communication deficits and distrust between plan owners and evaluators. Rather than being used only as a final judgment, a learning use of the CBA is desired, for which improving communication and trust between plan owners and evaluators seems crucial. We introduced five potential communication and trust improvements (interventions and related mechanisms) derived from the academic literature, which were tested in an experiential case study design with two sequential cases. Our main research question was: Do the interventions, derived from the theoretical literature, improve communication and trust conditions between plan owners and evaluators in CBA processes in practice, and through which mechanisms?

Notwithstanding the limitations of the research setup, which are discussed below, our findings indicate that the five interventions have some influence on the conditions for interpersonal communication and trust in CBA processes, provided they are adapted to fit the assessment context, as demonstrated in the case study. However, some mechanisms appeared easier to measure than others. The observer had, for example, trouble estimating ‘Strengthening relationships with others.’ Also, the mechanisms of perceiving ‘win–win solutions’ and ‘shared interests’ seemed rather abstract to the participants when reflecting on them in the questionnaire, as they mentioned in the group discussion. Nevertheless, the interventions seemed to positively influence communication- and trust-enhancing mechanisms in the following ways:

  • The impact of deliberately organizing a meeting between the plan owners and evaluators, as such meetings do not always happen spontaneously in CBA processes.

  • The impact of discussing the plan and CBA simultaneously and early on, and of including participants with knowledge of and experience with the plan or CBA.

  • The impact of being prepared both to have an open attitude towards each other, the plan and the CBA, and on the content of the discussion; and of understanding one's own reasoning, viewpoints and role in the discussion.

  • The impact of having such a discussion guided by a moderator who is able to create an open and safe atmosphere, and who keeps the discussion explicit and jargon-free.

  • The impact of using an effective dialogue mode that invites different views and encourages others to react, and which structures the discussion and facilitates interaction.

5.2 Reflection: practice and theory

In addition to testing and discussing how communication and trust levels between plan owners and evaluators were improved, our research also underscored that applying theoretical innovations in practice hardly works without some fine-tuning. The experiential case study, with two sequential cases, supported this transition and narrowed the gap between theory and practice by making the interventions more context-specific. Furthermore, the experiential case study increased the chances that participants will actually adopt the planning innovation by allowing both researchers and practitioners to gain experience with it (Straatemeier et al. Citation2010). After Case 2, there were indeed signs that the participants were interested in adopting the performed interventions in their daily practices. These experiences form arguments for embracing research approaches like the experiential case study.

The findings of the case study also resonated with insights from planning theory, while indicating where further research could focus. For example, they confirmed that it is crucial for participants in an assessment process to actively engage with each other, appreciate each other, recognize commitments and struggles and gain insight into strengths and weaknesses (Kidd & Fisher Citation2007). Moreover, the research indicated that a meeting as such is not enough. Specific effort is necessary to collectively create meaning by not only addressing formal knowledge, but also including other types of knowledge, like participants' own experiences or personal stories (Innes Citation1998; Nonaka et al. Citation2000; Healey Citation2009). The Effects Arena dialogue tool used in Case 2 seemed important for this, as it invited the participants to share a lot of knowledge without becoming too broad or vague, as advocated by Forester (Citation1999) and Schön (Citation1983). However, the assessment and planning context of such a CBA dialogue seemed important in deciding its characteristics: how much time is needed, which questions need to be raised, what instructions are necessary for the participant roles, what the level of preparation should be, who needs to participate, and what discussion tool needs to be used. More research could reveal how a communication and trust-building strategy should be applied in other phases of CBA assessment processes and in other planning contexts. This additional research could also explore the role of the different personal characteristics of participants.

The setup of the experiential case study, though, also has some disadvantages, such as limited validity and reliability of the research results. Besides the limitation of the small sample (two cases with 19 participants in total), as mentioned in Section 2, the case study was largely an academic setting. How the communication and trust-building interventions could function in real planning contexts, and how they support the use of CBA as a learning tool, warrants more investigation. Furthermore, the participants were self-selected (they were willing to cooperate) and knew they were participating in an academic experiment. As such, the possible impact of, for example, their personalities was not taken into account. Moreover, the interventions were performed only once in each case. Participants suggested that they should be performed several times during a CBA process and possibly with other participants, such as citizens or politicians. Again, further investigation is needed to understand how a communication and trust-building strategy might function throughout the whole CBA process, and when other factors like power imbalances between participants are at play.

All these questions point to two possible research strategies that go beyond the academic setting. One would be to integrate the interventions into real-world planning processes and monitor their workings. This approach would require acceptance and commitment on the part of plan owners and evaluators, and enough research time and capacity for the planning process to be monitored as it unfolds. Another strategy would be to assess, ex post facto, planning processes that, while not formally applying the five interventions, show clear differences in the way elements of the interventions have been applied or not applied (e.g. planning processes with and without face-to-face meetings between plan owners and evaluators, or with and without a mediator). Such a context-rich research strategy could form a valuable addition to the context-poor but control-rich research conducted here.

References

  • Ackerman F, Heinzerling L. 2002. Pricing the priceless. Cost-benefit analysis of environmental protection. U Pa L Rev. 150(5):1553–1584.
  • Akgün AE, Lynn GS, Byrne JC. 2003. Organizational learning: a socio-cognitive framework. Hum Relat. 56:839–868.
  • Allmendinger P. 2002. Towards a post-positivist typology of planning theory. Plan Theory. 1:77–99.
  • Annema JA, Koopmans C, van Wee B. 2007. Evaluating transport infrastructure investments: the Dutch experience with a standardized approach. Transport Rev. 27(2):125–150.
  • Argyris C. 1977. Double loop learning in organizations. Harvard Business Review. September–October:115–125.
  • Argyris C. 1991. Teaching smart people how to learn. Harvard Business Review. May 4.
  • Banister D. 2008. The sustainable mobility paradigm. Transport Pol. 15(2):73–80.
  • Bertolini L, Le Clercq F, Straatemeier T. 2008. Urban transportation planning in transition (introduction to the theme issue). Transport Pol. 15(2):69–72.
  • Beukers E, Bertolini L, Te Brömmelstroet M. 2012. Why cost benefit analysis is perceived as a problematic tool for assessment of transport plans: a process perspective. Transport Res A Pol Pract. 46(1):68–78.
  • Beukers E, Bertolini L, Te Brömmelstroet M. 2014. Using cost benefit analysis as a learning process: identifying interventions for improving communication and trust. Transport Pol. 31:61–72.
  • Bestuursregio Utrecht. 2012. Snel, betrouwbaar en effectief. OV-visie voor de regio Utrecht [Quick, reliable, and effective. Public transport vision for the Utrecht region]. Utrecht: Bestuursregio Utrecht.
  • Bryman A. 2008. Social research methods. Oxford: Oxford University Press.
  • Curtis C. 2008. Planning for sustainable accessibility: the implementation challenge. Transport Pol. 15(2):104–112.
  • Damart R, Roy B. 2009. The uses of cost-benefit analysis in public transportation decision making in France. Transport Pol. 16(4):200–212.
  • [ECMT] European Conference of Ministers of Transport. 2004. Assessment & decision making for sustainable transport. European Conference of Ministers of Transport, London, UK.
  • Edelenbos J, Klijn EH. 2007. Trust in complex decision-making networks. A theoretical and empirical exploration. Admin Soc. 39:25–50.
  • Eliasson J, Lundberg M. 2010. Do cost-benefit analyses influence transport investment decisions? Experiences from the Swedish transport investment plan, 2010–2021. Twelfth World Conference on Transport Research; July 11–15; Lisbon, Portugal.
  • Forester J. 1987. Planning in the face of conflict. J Am Plan Assoc. 53(3):303–314.
  • Forester J. 1999. The deliberative practitioner. Encouraging participatory planning processes. Cambridge (MA): The MIT Press.
  • Handy SL. 2008. Regional transportation planning in the US: an examination of changes in technical aspects of the planning process in response to changing goals. Transport Pol. 15(2):113–126.
  • Healey P. 1999. Institutionalist analysis, communicative planning, and shaping places. J Plan Educ Res. 19(2):111–121.
  • Healey P. 2007. Urban complexity and spatial strategies. Towards a relational planning for our times. London: Routledge.
  • Healey P. 2009. In search of the ‘strategic’ in spatial strategy making. Plan Theory Pract. 10:439–457.
  • Hull A, Alexander ER, Khakee A, Woltjer J, editors. 2011. Evaluation for participation and sustainability in planning. Oxon: Routledge.
  • Huxley M, Yiftachel O. 2000. New paradigm or old myopia? Unsettling the communicative turn in planning theory. J Plan Educ Res. 19(4):333–342.
  • Innes JE. 1998. Information in communicative planning. J Am Plan Assoc. 64:52–63.
  • Innes JE, Booher DE. 2003. Collaborative policymaking: governance through dialogue. In: Hajer MA, Wagenaar H, editors. Deliberative policy analysis. Understanding governance in the network society. Cambridge: Cambridge University Press; p. 33–59.
  • Jong DM, Geerlings H. 2003. Exposing weaknesses in interactive planning: the remarkable return of comprehensive policy analysis in The Netherlands. Imp Assess Proj Appraisal. 21:281–291.
  • Khakee A. 2003. The emerging gap between evaluation research and practice. Evaluation. 9(3):340–352.
  • Kidd S, Fisher TB. 2007. Towards sustainability: is integrated appraisal a step in the right direction? Environ Plan C Gov Pol. 25:233–249.
  • Lander MC, Purvis RL, McCray GE, Leigh W. 2004. Trust-building mechanisms utilized in outsourced IS development projects: a case study. Inform Manage. 41:509–528.
  • Laurian L. 2009. Trust in planning: theoretical and practical considerations for participatory and deliberative planning. Plan Theory Pract. 10:369–391.
  • Mackie P. 2010. Cost-benefit analysis in transport: a UK perspective. Mexico: International Transport Forum.
  • Mackie P, Preston J. 1998. Twenty-one sources of error and bias in transport appraisal. Transport Pol. 5:1–7.
  • Martinsen JA, Odeck J, Kjerkreit A. 2010. Why benefit-cost analyses matter less and how it can be improved for decision making in the transport sector – experiences from the Norwegian National Transport Plan 2010–2019. London: Association for European Transport and Contributors.
  • Morrison-Saunders A, Pope J, Gunn JAE, Bond A, Retief F. 2014. Strengthening impact assessment: a call for integration and focus. Imp Assess Proj Appraisal. 32(1):2–8.
  • Mouter N, Annema JA, Van Wee B. 2013. Ranking the substantive problems in the Dutch cost-benefit analysis practice. Transport Res A. 49:241–255.
  • Municipality of Amsterdam. 2008. Amsterdamse OV-visie 2008–2020. Een enkeltje Topstad [Amsterdam public transport vision 2008–2020. A one-way ticket to top city]. Gemeente Amsterdam: Dienst Infrastructuur Verkeer en Vervoer.
  • Naess P. 2006. Cost-benefit analyses of transportation investments. Neither critical nor realistic. J Crit Real. 5:32–60.
  • Nonaka I, Konno N. 1998. The concept of ‘Ba’: building a foundation for knowledge creation. Calif Manage Rev. 40:40–54.
  • Nonaka I, Toyama R, Konno N. 2000. SECI, ba and leadership: a unified model of dynamic knowledge creation. Long Range Plan. 33:5–34.
  • Nonaka I, Von Krogh G, Voelpel S. 2006. Organizational knowledge creation theory: evolutionary paths and future advances. Organ Stud. 27:1179–1208.
  • Odgaard T, Kelly C, Laird J. 2005. Current practice in project appraisal in Europe. White Rose Research Online, University of Leeds, Sheffield & York.
  • Owens S, Rayner T, Bina O. 2004. New agendas for appraisal: reflections on theory, practice and research. Environ Plan A. 36:1943–1959.
  • Page M, Kelly C, May A, Jones P, Forrester J. 2009. Enhancing appraisal methods to support sustainable transport and land use policies. Eur J Transport Infrastruct Res. 9:296–313.
  • Pawson R, Tilly N. 1997. Realistic evaluation. London: Sage.
  • Runhaar H, Driessen PJ. 2007. What makes strategic environmental assessment successful environmental assessment? The role of context in the contribution of SEA to decision-making. Imp Assess Proj Appraisal. 25(1):2–14.
  • Saarikoski H. 2000. Environmental impact assessment (EIA) as collaborative learning process. Environ Imp Assess Rev. 20(6):681–700.
  • Sager T, Ravlum IA. 2005. The political relevance of planners' analysis: the case of a parliamentary standing committee. Plan Theory. 4:33–65.
  • Schön DA. 1983. The reflective practitioner. How professionals think in action. Aldershot: Ashgate Publishing Limited.
  • Selicato F, Maggio G. 2011. The evaluation process for a new planning culture: regulatory compliance and learning opportunities. In: Hull A, Alexander ER, Khakee A, Woltjer J, editors. Evaluation for participation and sustainability in planning. London: Routledge.
  • Stichting Experiment Volkshuisvesting. 2010. Effectenarena. Den Haag: Stichting Experiment Volkshuisvesting.
  • Straatemeier T, Bertolini L. 2008. Joint accessibility design: framework developed with practitioners to integrate land use and transport planning in the Netherlands. Transport Res Rec. 2077:1–8.
  • Straatemeier T, Bertolini L, Te Brömmelstroet M, Hoetjes P. 2010. An experiential approach to research in planning. Environ Plan B Plan Des. 37:578–591.
  • Stoeglehner G, Brown AL, Kørnøv LB. 2009. SEA and planning: ‘ownership’ of strategic environmental assessment by the planners is the key to its effectiveness. Imp Assess Proj Appraisal. 27(2):111–120.
  • Van Aken J. 2004. Management research based on the paradigm of the design sciences: the quest for field-tested and grounded technological rules. J Manage Stud. 41(2):219–246.
  • Van Buuren A, Nooteboom S. 2009. Evaluating strategic environmental assessment in the Netherlands: content, process and procedure as indissoluble criteria for effectiveness. Imp Assess Proj Appraisal. 27(2):145–154.
  • Van Wee B. 2011. Transport and ethics. Ethics and the evaluation of transport policies and projects. Cheltenham: Edward Elgar.
  • Van Wee B, Molin E. 2012. Transport and ethics: dilemmas for CBA researchers. An interview-based study from the Netherlands. Transport Pol. 24:30–36.
  • Vaughan WJ, Russell CS, Rodriguez DJ, Darling AC. 2000. Uncertainty in cost-benefit analysis based on referendum contingent valuation. Imp Assess Proj Appraisal. 18(2):125–137.
  • Veisten K, Elvik R, Bax C. 2010. Assessing conceptions of cost-benefit analysis among road safety decision-makers: misunderstandings or disputes? Imp Assess Proj Appraisal. 28(1):57–67.
  • Vickerman R. 2000. Evaluation methodologies for transport projects in the United Kingdom. Transport Pol. 7:7–16.
  • Willson R. 2001. Assessing communicative rationality as a transportation planning paradigm. Transportation. 28:1–31.
  • Ziller A, Phibbs P. 2003. Integrating social impacts into cost-benefit analysis: a participative method: case study: the NSW area assistance scheme. Imp Assess Proj Appraisal. 21(2):141–146.
