
Does Compact Development Make People Drive Less?

Pages 7-18 | Published online: 10 Nov 2016

Abstract

Problem, research strategy, and findings: Planners commonly recommend compact development in part as a way of getting people to drive less, with the idea that less driving will lead to more sustainable communities. Planners base their recommendations on a substantial body of research that examines the impact of compact development on driving. Different studies, however, have found different outcomes: Some find that compact development causes people to drive less, while others do not. I use meta-regression analysis to a) explain why different studies of driving and compact development yield different results, and b) combine the findings of many studies into reliable statistics that can better inform planning practice. I address the following questions: Does compact development make people drive less, and if so, how much less? I find that compact development does make people drive less: Most of the compact development features I study have a statistically significant negative influence on driving. That influence, however, is fairly small. My findings are limited to some extent because they are derived from small sample sizes.

Takeaway for practice: Planners should not rely on compact development as their only strategy for reducing driving unless their goals for reduced driving are very modest and can be achieved at a low cost.

Notes

1. See for example, Buehler, Pucher, Gerike, and Gotschi (Citation2016); Lee, Lee, and Jun (Citation2014); Mohajeri, Gudmundsson, and French (Citation2015); Waygood, Sun, and Susilo (Citation2014); and Weinberger, Dock, Cohen, Rogers, and Henson (Citation2015).

2. The “precision” of a quantitative statistic like an elasticity is reflected in its standard error. Meta-regression analysis corrects for the effects of sampling error by weighting elasticities by the inverse of their standard errors, thus placing more weight on elasticities that were measured with greater precision.
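As a minimal sketch of the precision weighting described above (the elasticities and standard errors below are invented for illustration and are not from this study), each estimate can be weighted by the inverse of its variance, so that more precisely measured elasticities count for more:

```python
# Hypothetical elasticities of driving with respect to a compact-development
# feature, each with its estimated standard error (illustrative values only).
elasticities = [-0.22, -0.05, -0.10]
std_errors = [0.10, 0.02, 0.05]

# Precision weighting: weight each estimate by the inverse of its variance
# (1 / SE^2), so elasticities measured with smaller standard errors
# dominate the combined estimate.
weights = [1.0 / se ** 2 for se in std_errors]
weighted_mean = sum(w * e for w, e in zip(weights, elasticities)) / sum(weights)

print(round(weighted_mean, 4))  # the combined, precision-weighted elasticity
```

Here the imprecise −0.22 estimate (SE = 0.10) has little pull on the result; the combined elasticity lands near the precisely measured −0.05 estimate.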

3. Or regression coefficients, in cases where the studies do not report elasticities.

4. While studies with small sample sizes do not automatically produce relatively large elasticities, researchers have commonly observed that such studies tend to report larger elasticities than studies with large sample sizes do. The generally accepted explanation is that researchers working with small samples often respecify and retest their models until the models produce elasticities large enough to be declared statistically significant, despite the small sample sizes that otherwise make statistical significance more difficult to achieve (Stanley & Doucouliagos, Citation2012).

5. Technically, a user of meta-regression analysis can choose whether to place more weight on elasticities with larger sample sizes or on elasticities with smaller standard errors. The latter is the preferred approach and the one I use in my analyses. Although the two approaches are not equivalent, weighting by smaller standard errors usually also favors larger sample sizes, because elasticities with smaller standard errors generally come from larger samples.

6. Selective reporting bias has traditionally been referred to as “publication bias,” but Stanley and Doucouliagos (Citation2012) recommend the phrase “selective reporting bias” because this type of bias has been detected even among unpublished papers and because there is “no detectable difference in quality between published and unpublished papers as measured by the objective statistical criterion of (standard error)” (p. 12).
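One common diagnostic for selective reporting bias in meta-regression (a funnel-asymmetry, or Egger-style, test; not necessarily the exact procedure used in this study) regresses reported effects on their standard errors: a slope far from zero suggests that imprecise studies systematically report larger effects. A minimal sketch with invented numbers:

```python
# Hypothetical (standard error, elasticity) pairs, invented for illustration.
# Under selective reporting, imprecise studies (large SE) tend to report
# larger-magnitude effects, producing a systematic slope of effect on SE.
std_errors = [0.02, 0.05, 0.10, 0.20]
elasticities = [-0.04, -0.07, -0.12, -0.22]


def ols(x, y):
    """Simple ordinary-least-squares fit, returning (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    return my - slope * mx, slope


intercept, slope = ols(std_errors, elasticities)
# A slope far from zero signals funnel asymmetry (possible selective
# reporting); the intercept estimates the effect net of that asymmetry.
```

In this fabricated example the slope is strongly negative, so the reported elasticities grow in magnitude exactly as precision falls, and the intercept (about −0.02) plays the role of a bias-corrected effect.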

7. I use the term “planning literature” loosely to refer to the set of academic journals in which planning researchers typically publish their papers, although there is no hard and fast way to perfectly distinguish the planning literature from related literatures (e.g., urban studies, economics, transportation engineering, etc.).

8. Academic researchers face incentives to publish, including the career advancement and salary increases that can accompany success in publishing their research in academic journals.

9. Researchers commonly use an elasticity’s standard error to measure the precision of the elasticity.

10. Meta-regression analysis is also widely used in other fields, including business and medicine.

11. I do not present findings for D-variables that have been studied in the literature but for which I only had two or fewer estimated elasticities.

12. Three is the minimum number of studies to permit a meta-analysis (Treadwell, Tregear, Reston, & Turkelson, Citation2006).

Additional information

Notes on contributors

Mark R. Stevens

Mark R. Stevens ([email protected]) is an associate professor at the University of British Columbia.
