Abstract

The ideal of deliberation requires that citizens engage in reasonable discussion despite disagreements. In practice, if their experience is to match this normative ideal, participants in an actual deliberation should prefer moderate disagreement to conflict-free discussion within homogeneous groups, and to conflict-driven discussion where differences are intractable. This article proposes a research design and methods for assessing the quality of a deliberative event based on the perceptions of the participants themselves. In a structured deliberative event, over 2,000 individuals were assigned to small groups composed of about 10 persons of varying levels of ideological difference to discuss health care reform in California. We find that participants experience higher satisfaction with deliberation under moderate ideological difference than when they are in homogeneous or in highly disparate groups. That moderate disagreement induces optimal deliberation is consistent with normative expectations and empirically demonstrates the deliberative quality of this event.

Notes

1. In her pathbreaking book Beyond Adversary Democracy, Jane Mansbridge (Citation1980) argues that in contexts of deep disagreement, adversary modes of democratic decision making are more appropriate than those, like deliberation, that aim toward consensus in part because efforts to seek consensus can work to exclude and silence weaker parties. See also Sanders (Citation1997).

2. A reviewer describes this as a “Goldilocks” test for deliberative quality.

3. The results of our study rely on within-sample comparisons and hence do not rely on claims of the representativeness of participants. See Supplemental Material, Appendix A.

4. As we discuss in the Supplemental Material, Appendix A, in this study we observe only participants who volunteered to take part in a deliberative event, so we are unable to test whether nonparticipants would have reacted similarly to the structured discussion. If selection matters, then the selection process must be treated as an integral part of the design of structured deliberation, one that can be done well or poorly.

5. In many ways the design of this study resembles that of Deliberative Poll studies (e.g., Fishkin, Citation2009; Luskin, O’Flynn, Fishkin, & Russell, Citation2014). We compare and contrast our study design with that of deliberative polls in the Supplemental Material, Appendix B.

6. There are very few missing observations in the data, and we have good reasons to believe each participant at these sites filled out at least one of the surveys (the number of paper surveys returned at each site exceeded the number of people entering responses to the in-session polling keypads). At the four main sites we include in the analysis, only 202 participants (15.3%) failed to fill out a pretest survey, and 223 (16.9%) failed to fill out a post-test survey. In the following analyses, we impute the missing pre- or post-test data (but never both) under the assumption that the data are missing at random conditional on observed variables, using methods that account for uncertainty in the imputation (Tanner & Wong, Citation1987). The results are nearly identical using complete case analysis where the missing observations are dropped. In the Supplemental Material, Appendix E, we also present extensive sensitivity analysis that demonstrates our imputed results are robust to extreme assumptions regarding the missing data.

7. AmericaSpeaks (the organizers) contracted with the survey research firm NSON, which used random sampling techniques to recruit participants via e-mail and telephone. Fifty-one percent of participants were recruited by NSON. AmericaSpeaks worked with local groups in each locality to recruit the remainder of participants. AmericaSpeaks used this method of recruitment as a means to reach out to non-activists as well as to ensure participation from underrepresented groups (see Fung & Lee, Citation2008).

8. Participants noted their table number on both surveys. In Humboldt, none out of 396 participants failed to comply with their table assignment; in San Luis Obispo, none out of 264; in Sacramento, 3 out of 407; and in Riverside, 4 out of 250 participants.

9. Specifically, the organizers at each site divided up the tables into three groups. They assigned the first arriver to the first table of the first group, the second arriver to the second table of the first group, and so on. They repeated this process among the first group of tables until the tables in the first group were half-full, at which point they began assigning arrivers to the second group of tables. They repeated this process for the second and the third group of tables. When all of the tables were half-full, they used an identical assignment process to fill up the tables in the first group, then to fill up the tables in the second group, and then the third.
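As a minimal sketch, the procedure described in Note 9 amounts to a two-pass round-robin over groups of tables, filling each group to half capacity before moving on and then topping the tables up in the same order. The function below is purely illustrative; the table labels, group sizes, and capacity are our own assumptions, not details reported by the organizers.

```python
# Illustrative sketch of the arrival-order seating procedure (Note 9).
# All names and parameters here are assumptions for illustration.

def assign_tables(n_arrivers, groups, capacity):
    """Assign arrivers, in arrival order, to tables.

    `groups` is a list of lists of table labels. Tables are filled
    round-robin within a group; each group is filled to half capacity
    before the next group starts, then all tables are topped up to
    full capacity in the same order.
    """
    seats = {t: 0 for g in groups for t in g}
    assignment = []  # assignment[i] = table of the i-th arriver
    # Two passes: first to half-full, then to full.
    for limit in (capacity // 2, capacity):
        for group in groups:
            # Cycle through the group's tables until each reaches the limit.
            while any(seats[t] < limit for t in group) and len(assignment) < n_arrivers:
                for t in group:
                    if len(assignment) >= n_arrivers:
                        break
                    if seats[t] < limit:
                        seats[t] += 1
                        assignment.append(t)
    return assignment
```

For example, with two groups of two four-seat tables, the first four arrivers alternate between the first group's tables, the next four between the second group's, and only then are the tables filled to capacity in the same order.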

10. In the figure, variables in circles are latent variables (so the value for each participant is a distribution, not a constant), variables in rectangles are measured in a survey (pretreatment variables are shaded), and the arrows indicate which variables enter which equations. For each indicator equation, we use an ordered logit link function estimating m − 1 thresholds, where m is the number of response categories, and a factor coefficient.
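The ordered-logit indicator equations in Note 10 can be illustrated with a short sketch: given a latent factor score, a factor loading, and m − 1 increasing thresholds, the logistic CDF yields the probability of each of the m response categories. The function and variable names below are our own illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a single ordered-logit indicator equation:
# m response categories imply m - 1 estimated thresholds (Note 10).
import numpy as np

def ordered_logit_probs(latent, thresholds, loading):
    """P(response = k) for k = 0..m-1, given a latent factor score."""
    thresholds = np.asarray(thresholds, dtype=float)
    # Cumulative probabilities P(response <= k) via the logistic CDF.
    cum = 1.0 / (1.0 + np.exp(-(thresholds - loading * latent)))
    cum = np.concatenate([cum, [1.0]])           # top category closes the scale
    probs = np.diff(np.concatenate([[0.0], cum]))  # category probabilities
    return probs
```

With three thresholds (four response categories), the function returns four nonnegative probabilities that sum to one.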

11. One might worry that a nonlinear aggregate pattern might emerge with a mixture of participants with disagreement-averse and disagreement-acceptant linear responses to disagreement. However, to create this nonmonotonic pattern, the weighting of this mixture of linear responses would need to be a function of disagreement, which cannot occur under the assumption that assignment to disagreement levels is ignorable. Because of the possibility of this mixture, failing to reject the null for the slope coefficients does not reject either the disagreement-averse or -acceptant linear response. Our main interest, however, is only in distinguishing the linear cases from the nonlinear case.

12. If, however, private disagreement indeed was not expressed, given a possible propensity of participants to avoid conflict (MacKuen, Citation1990), then the disagreement-curious response likely would not be induced. Instead, satisfaction would depend heavily on individuals’ preference distance from the group; increasing group diversity would only decrease deliberative satisfaction because the conversation would become more stilted or contrived. As we show later, however, we do not observe any relationship between an individual’s preference extremity and satisfaction.

13. The pretest also included a set of items measuring participants’ health care policy views that do not scale with ideology. These items form a scale that is orthogonal to ideology, but this scale is not predictive in the models we report. This confirms the longstanding notion that ideology structures and lends coherence to political debate (Hinich & Munger, Citation1994).

14. In the Supplemental Material, Appendix E, we show the results are identical with a different measure of dispersion based on absolute difference.

15. Participants cannot be randomly assigned to distance from the table mean, since this distance is largely a function of the individual’s own ideological extremity. We include this measure as a pretreatment covariate, and we show in the Supplemental Material, Appendix C, that it is uncorrelated with the table-level measure of disagreement. Including or excluding the individual-level disagreement measure does not affect the coefficient estimates for the table-disagreement variable, our causal variable.

16. We include the item regarding viewing Sicko because it loads very strongly on the ideology scale. This is no surprise since it is common for people to seek out media that offer views consistent with their predispositions (Sunstein, Citation2008). Empirically, among participants at the CaliforniaSpeaks event, self-reported liberals were 11 times more likely to see Sicko than conservatives, and so this item is valid for measuring ideology.

17. The programming for the table-level functions is based on Congdon’s Bayesian spatial models (Congdon, Citation2003, Chapter 7).

18. The mean level of ideological diversity across the tables is approximately 0.88 with a standard deviation of about 0.18.

19. Later we also test whether ideological extremists respond differently to disagreement than moderates (they do not).

20. We replicate these results using a simpler model to show the results are not somehow dependent on the Bayesian methods we use. In this simpler model, we first estimate factor scores for ideology, process satisfaction, and policy satisfaction. We then compute the table-level disagreement measure separately for each participant. Finally, using OLS we regress each dependent variable, process quality and policy quality, on the disagreement measure and its square. The coefficients have identical signs and are all statistically significant.
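As a hedged illustration of the simpler replication in Note 20, the final OLS step can be sketched as a regression on disagreement and its square. The data below are simulated stand-ins for the estimated factor scores (the true inverted-U coefficients are our own assumption for the simulation), so only the form of the model, not the numbers, reflects the study.

```python
# Illustrative sketch of the quadratic OLS specification (Note 20),
# with simulated data standing in for the estimated factor scores.
import numpy as np

rng = np.random.default_rng(0)
n = 500
disagreement = rng.uniform(0.0, 1.5, n)  # table-level disagreement (simulated)
# Simulate an inverted-U ("Goldilocks") satisfaction response plus noise;
# the coefficients 2.0 and -1.2 are arbitrary assumptions.
satisfaction = 2.0 * disagreement - 1.2 * disagreement**2 + rng.normal(0, 0.3, n)

# Design matrix: intercept, disagreement, disagreement squared.
X = np.column_stack([np.ones(n), disagreement, disagreement**2])
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
```

Under an inverted-U response, the fitted linear coefficient is positive and the squared coefficient negative, matching the sign pattern the note reports.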

21. Note that satisfaction is relative and we are not stating that participants at this event are dissatisfied with deliberation amid high disagreement in an absolute sense. Indeed, as Luskin and colleagues (2014) show, well-designed deliberation can be successful even when done across deep divides. In the present case, the overwhelming majority of participants either agreed or strongly agreed with each item measuring their satisfaction with the CaliforniaSpeaks event and remarkably few expressed any dissatisfaction with the event.

22. If one were to take the point estimates as true, the results would suggest that extremists can separate their evaluations of the quality of the process from their own satisfaction with the outcomes.

23. Note that this subgroup analysis does not constitute a data-driven “fishing expedition” since our intent is to show that the main effect is robust across subgroups, rather than to try to uncover effects within subgroups.

24. We dichotomize these variables in this way in order to make the statistical power for each subgroup in the statistical model roughly the same.

25. The full Bayesian model is no longer stable when we interact the moderating variable with the second-order term for the ideology scale; the data simply do not support this estimation. Instead, we rely on the methods described in Note 20 for this exploratory analysis. Because of the limitations we describe in that note, the standard errors do not retain their ordinary meaning, so here we use them only as a heuristic gauge of the precision of our estimates.

Additional information

Notes on contributors

Kevin M. Esterling

Kevin M. Esterling is Professor, Department of Political Science, University of California–Riverside.

Archon Fung

Archon Fung is Professor, JFK School of Government, Harvard University.

Taeku Lee

Taeku Lee is Professor, Department of Political Science, University of California–Berkeley.
