Original Articles

Assessing the effect of energy technology labels on preferences

Pages 245-265 | Received 07 Jun 2012, Accepted 15 Apr 2013, Published online: 24 May 2013
 

Abstract

This paper investigates the effect of using labelled versus generic unlabelled alternatives in choice experiments (CEs) in the case of a multidimensional environmental good (power generation) that is often associated with strong prior beliefs and emotions. Specifically, it assesses the effect of naming selected low-carbon energy technologies on the underlying choices, the implicit prices for the technology attributes and the total economic values attached to their environmental benefits. Our findings are only mildly suggestive of a labelling effect whereby respondents employ different processing strategies when confronted with labels, focusing principally on the label and/or weighing the attributes differently. In the case of power generation, the use of labelled alternatives led to significantly different estimated attribute parameters; in contrast, most implicit prices remained indistinguishable and the computed welfare measures were found to be statistically equivalent.


Acknowledgements

The authors thank the Greek State Scholarship Foundation (IKY) and the University of London Central Research Fund for funding this research, and two anonymous referees for valuable comments and suggestions.

Notes

**Statistically different at 5% level compared to the SE England population characteristics.

(**)Statistically different at 5% level compared to the labelled CE sample characteristics.

Note: SE, South East; data for 2007/2008.

Source: NOMIS/ONS (2009).

**Statistically different distribution of responses at 5% level compared to the LABEL treatment.

Note: Figures may not sum to 100 due to rounding; D, disagree; U, unsure; A, agree.

Note: All demographic, knowledge and attitudinal variables are interacted with each of the ASCs (ASCW, ASCB, ASCN, ASCLC) in model estimation.

***, **, *Significant at 1%, 5% and 10% level respectively.

Note: p-weight = median income; the sign of the standard deviations is irrelevant – interpret as positive (Hole 2007).

***, **Significant at 1% and 5% level respectively.

Note: 95% confidence interval calculated using the Delta method (Greene 1997).

***, **Significant at 1% and 5% level respectively.

1. Other alternative-specific attributes or bundles of attributes could have arguably been added to the design, for example, explicitly informing respondents about issues such as the risk of accidents, radioactive leakages, etc. The impact of omitted attributes on choices cannot be separately estimated in the design and is confounded with the labels, making the isolation of the effect of reduced attention to the attributes potentially less precise. As in all CE studies, we had to balance the need to describe the options as fully as possible, by including multiple attributes and levels, with the need to keep the choices cognitively manageable for respondents. Moreover, including radioactive risks as an attribute for example would in effect identify the nuclear power option and transform the generic CE into a ‘labelled’ CE. Our pilot testing did not detect any perceived major omissions in our experimental design.

2. The mixed logit model can also be used without a random-coefficients interpretation, instead representing error components that create correlations among the utilities for different alternatives (non-zero error components) and thus relax the independence of irrelevant alternatives (IIA) property of the standard logit model (Train 2009). An appropriate choice of variables to enter as error components allows various correlation patterns to be obtained. The random-coefficients and error-components specifications are formally equivalent (Train 2009), and the study's aim (preference heterogeneity or substitution patterns) dictates which specification should be selected. In our study, we selected the random-coefficients specification, although an error-components investigation would also provide interesting information, given the possibility of respondents' prior views of the labels being correlated with the alternatives' attributes (Hensher, Rose, and Greene 2006).
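As a concrete illustration of the random-coefficients specification discussed in this note, the sketch below simulates mixed logit choice probabilities by averaging standard logit probabilities over draws of the coefficient vector. All attribute values, means and standard deviations are invented for illustration and are not taken from the paper's estimates.

```python
import math
import random

def logit_probs(utils):
    """Standard logit probabilities for one choice set."""
    m = max(utils)  # subtract the max for numerical stability
    exps = [math.exp(u - m) for u in utils]
    s = sum(exps)
    return [e / s for e in exps]

def simulated_probs(x, beta_mean, beta_sd, n_draws=500, seed=0):
    """Random-coefficients logit: average the logit probabilities
    over draws of beta ~ Normal(beta_mean, beta_sd)."""
    rng = random.Random(seed)
    acc = [0.0] * len(x)
    for _ in range(n_draws):
        beta = [rng.gauss(m, s) for m, s in zip(beta_mean, beta_sd)]
        utils = [sum(b * xk for b, xk in zip(beta, alt)) for alt in x]
        p = logit_probs(utils)
        acc = [a + pi for a, pi in zip(acc, p)]
    return [a / n_draws for a in acc]

# Three alternatives described by two attributes (illustrative values)
X = [[1.0, 0.5], [0.2, 1.0], [0.0, 0.0]]
probs = simulated_probs(X, beta_mean=[0.8, -0.3], beta_sd=[0.4, 0.2])
print(probs)  # simulated choice probabilities, summing to 1
```

In estimation these simulated probabilities enter the simulated log-likelihood; here pseudo-random normal draws stand in for the Halton draws used in the paper.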

3. Preferences towards the price attribute are assumed to be homogeneous (i.e. non-random parameter) to facilitate the estimation of welfare measures (Chen and Cosslett 1998).

4. Effects coding is a functionally equivalent approach to dummy coding which, however, facilitates interpretation: the base-level impact is equal to the negative sum of the parameter values for the other levels (Louviere, Hensher, and Swait 2000).
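A minimal, generic illustration of the effects coding described in this note (the level names are hypothetical, not the attribute levels used in the survey): each non-base level gets its own column, and the base level is coded −1 in every column, so the base-level effect is recovered as the negative sum of the estimated coefficients.

```python
def effects_code(level, levels):
    """Effects-code a categorical variable: one column per non-base
    level; the base level (taken here as the last entry of `levels`)
    is coded -1 in every column."""
    base = levels[-1]
    if level == base:
        return [-1] * (len(levels) - 1)
    return [1 if level == lv else 0 for lv in levels[:-1]]

levels = ["low", "medium", "high"]   # "high" serves as the base level
print(effects_code("low", levels))   # [1, 0]
print(effects_code("high", levels))  # [-1, -1]
```

With dummy coding the base row would be all zeros instead, confounding the base level with the constant; the −1 coding avoids that.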

5. Each model was estimated with 50 Halton draws, which is considered a reasonably acceptable number of simulation draws (Hensher, Rose, and Greene 2006). Using 100 Halton draws instead did not produce any significant differences in overall model fit or the statistical significance of the explanatory variables.
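For reference, the Halton draws mentioned in this note come from a simple deterministic low-discrepancy sequence; a minimal generator is sketched below (successive random coefficients conventionally use successive prime bases, and early draws are often discarded in practice).

```python
def halton(index, base):
    """Return the index-th element (index >= 1) of the Halton
    sequence for a given prime base, by reflecting the base-`base`
    digits of `index` about the radix point."""
    f, r = 1.0, 0.0
    i = index
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# First five draws in base 2 fill the unit interval evenly
draws_b2 = [halton(i, 2) for i in range(1, 6)]
print(draws_b2)  # [0.5, 0.25, 0.75, 0.125, 0.625]
```

Because the sequence covers (0, 1) more evenly than pseudo-random draws, far fewer draws (e.g. the 50 used here) are needed for a stable simulated log-likelihood.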

6. Responses to our CE debriefing questions suggest that LABEL CE respondents associated the Distance from home attribute mostly with safety and health impacts.

7. In the case of the RPL, the grid-search procedure yields a global optimum for the LL function at the relative scale parameter (Louviere, Hensher, and Swait 2000).

8. Of course, as the parameter vector and scale factor are confounded, the rejection of Hypothesis 1 implies that it is not possible to attribute real differences in parameters and/or scale factors to parameter vector inequality only, scale inequality only or both (Swait and Louviere 1993). Blamey et al. (1999, 4) suggest: ‘In the event that H1 is rejected for a given set of parameters, it is desirable to try and isolate the source of the violation by examining the plot of coefficient vectors and allowing these parameters to differ between datasets. H1 testing is then repeated for the component of the parameter vectors subject to the equality restriction’.
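The parameter-equality test behind Hypothesis 1 reduces to a likelihood-ratio comparison of the pooled model (estimated at the optimal relative scale from the grid search) against the two treatment-specific models. A minimal sketch follows; the log-likelihood values are invented for illustration and are not the paper's results.

```python
def swait_louviere_lr(ll_pooled, ll_a, ll_b):
    """Likelihood-ratio statistic for parameter equality across two
    data sets, in the spirit of Swait and Louviere (1993): compare
    the pooled log-likelihood (at the optimal relative scale) with
    the sum of the separately estimated log-likelihoods. The result
    is referred to a chi-square distribution whose degrees of
    freedom equal the number of imposed restrictions."""
    return -2.0 * (ll_pooled - (ll_a + ll_b))

# Illustrative values only
stat = swait_louviere_lr(ll_pooled=-1250.0, ll_a=-610.0, ll_b=-625.0)
print(stat)  # 30.0 — compare with the relevant chi-square critical value
```

A statistic exceeding the critical value rejects joint equality, after which the stepwise relaxation quoted from Blamey et al. (1999) can be used to locate the offending parameters.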

9. Furthermore, the difference in the number of options between the LABEL and UNLABEL treatments may have also resulted in a difference in cognitive burden, in addition to the effect of labels on respondents’ cognitive processes.

10. The Delta method has been found to produce reasonably accurate confidence intervals when compared to alternative procedures such as the Krinsky and Robb and the bootstrap simulation methods (Hole 2006).
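To make the Delta-method construction concrete: with a fixed price coefficient, the implicit price of an attribute is the ratio −β_attr/β_price, and the Delta method linearises this ratio around the estimates to obtain a standard error. The sketch below uses invented parameter estimates and (co)variances, not the paper's results.

```python
import math

def wtp_delta_ci(b_attr, b_price, var_attr, var_price, cov, z=1.96):
    """Implicit price -b_attr/b_price and its 95% confidence interval
    via the Delta method. The gradient of the ratio with respect to
    (b_attr, b_price) is (-1/b_price, b_attr/b_price**2), so the
    approximate variance is g' V g for covariance matrix V."""
    wtp = -b_attr / b_price
    g1 = -1.0 / b_price
    g2 = b_attr / b_price ** 2
    var = g1 * g1 * var_attr + 2.0 * g1 * g2 * cov + g2 * g2 * var_price
    se = math.sqrt(var)
    return wtp, (wtp - z * se, wtp + z * se)

# Illustrative estimates: attribute coefficient 0.6, price coefficient -0.02
wtp, ci = wtp_delta_ci(b_attr=0.6, b_price=-0.02,
                       var_attr=0.01, var_price=1e-5, cov=-1e-4)
print(wtp, ci)  # point estimate 30.0 with its Delta-method interval
```

Krinsky–Robb instead simulates draws from the estimated coefficient distribution and takes percentiles of the resulting ratios; as Hole (2006) notes, the two typically agree closely.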
