
Artificial facilitation: Promoting collective reasoning within asynchronous discussions

Pages 214-231 | Published online: 19 Sep 2017
 

ABSTRACT

Online forums have become prominent in deliberative research as venues for infusing democracies with additional participatory elements. However, forums are criticized as unsuitable for high-quality discussions because they predominantly induce self-expressive talk with little prospect for deliberative virtues. Drawing on the "argumentative theory of reasoning," we hypothesize that an artificial facilitator (AF) fosters deliberative virtues by spurring interactions among users and promoting knowledge gain. In a randomized field experiment, we show that the AF indeed enhances interaction, but only partially increases knowledge gain. Psychological variables mediate whether participants learn more in a facilitated online forum.

Acknowledgements

We would like to thank André Bächtiger for his immensely helpful comments and suggestions on earlier versions of this article. We are also grateful to the National Centre of Competence in Research Democracy (NCCR) of the Swiss Science Foundation (SNF) for funding this research.

Notes

1. Although some authors observe significant levels of reflection (i.e., knowledge gains and opinion changes) in synchronous chats (Grönlund et al., 2009; Luskin et al., 2006), we lack comprehensive evidence for the case of asynchronous discussions.

2. Besides the theoretical argument for the closeness between our variable and the "need to evaluate" concept, we also found empirical support: our variable "difficulties to form opinions" correlates negatively with political interest, age, education, frequent online media usage, as well as pronounced political attitudes. The psychological literature also finds inverse correlations between the "need to evaluate" concept and those variables (Bizer et al., 2000; Federico, 2004). This nomological validation suggests that both markers are inverse proxies for one another.

3. The argument scheme used by our argument tree is derived from the IBIS standard (Kunz & Rittel, 1972), which is emerging as a "lingua franca" for introducing relatively simple semantic structure to online discussions. Platforms such as Cohere (Buckingham Shum, 2008), Deliberatorium (Klein, 2012; Klein & Iandoli, 2008), and Debategraph (http://debategraph.org) are prominent examples of IBIS-based systems. Some applications have been developed specifically to inform policy formulation processes (Elliman et al., 2006; Renton & Macintosh, 2007).

4. For customizing the interventions, the AF employs many of the available variables and statistics: to name a few, it uses "type of argument," "average rating," "variance of ratings," "number of positive ratings," "number of counter-arguments," "number of positively rated counter-arguments," and "elapsed time since login." It then applies formulas resembling the "attention-mediation metrics" implemented in the argument tree software Deliberatorium (Klein, 2012).

5. Examples are "Do you agree with the opinion of user10?," "Please justify why you disagree with this argument," or "In view of this strong counter-argument, are you ready to revise your opinion?" The AF continues to intervene in an iterative process until participants log out or no task remains. As to context sensitivity, the AF can choose the phrase "Please outline why you disagree with this argument" only when a participant (a) negatively rated an argument and (b) did not justify this decision either by creating a refuting argument or by positively rating an already existing refuting argument (thus indicating which argument he/she finds plausible). If more than one intervention fits, one is drawn randomly.
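The selection logic described in this note can be thought of as a set of (message template, eligibility rule) pairs evaluated against a participant's recorded actions, with a random draw when several rules fire. The following Python sketch is purely illustrative and assumes hypothetical state flags; none of the names are taken from the authors' implementation.

```python
import random

def needs_justification(state):
    """Rules (a) + (b) from the note: the participant rated an argument
    negatively but neither created a refuting argument nor positively
    rated an existing one."""
    return (state["rated_negatively"]
            and not state["created_refutation"]
            and not state["rated_refutation_positively"])

# Each intervention pairs a phrase with the predicate that licenses it.
# Further (template, predicate) pairs would cover the other intervention
# types mentioned in the note.
INTERVENTIONS = [
    ("Please outline why you disagree with this argument.",
     needs_justification),
]

def choose_intervention(state):
    """Collect all fitting interventions; if more than one fits,
    draw one at random, as the note describes."""
    fitting = [text for text, applies in INTERVENTIONS if applies(state)]
    return random.choice(fitting) if fitting else None
```

With only one rule registered, a participant who negatively rated an argument without justifying it receives the justification prompt, and a participant whose actions trigger no rule receives nothing.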

6. More precisely, the AF group contained more people who strongly agreed with the initiative, whereas the control group had a higher concentration of people who strongly disagreed. In sum, this produced a significant difference between the two groups (t test: p = 0.04).

7. The support chat was hardly used, and never for its intended purpose of providing assistance with the application. Only three participants made use of it: one complained about the polemical contributions of others; the other two expressed reservations about the software.

8. In fact, we estimated many different models probing the direct and indirect influence of numerous other variables. All the observations we discuss here were also confirmed in these models. We attach an extended model in the appendix.

9. Both effects passed all robustness tests, and we found them to correlate with each other (Pearson correlation of 0.34). We additionally conducted propensity score matching to correct for potential flaws in the randomization procedure; these models did not indicate any change in the results.

Additional information

Notes on contributors

Dominik Wyss

Dominik Wyss is a PhD candidate at the University of Lucerne in Switzerland and a research assistant at the University of Stuttgart in Germany. His research focuses on measuring and enhancing deliberative quality in online and offline discussion formats. His methodological approaches range from focus groups and online discussion field experiments to the analysis of social media data.

Simon Beste

Simon Beste is a PhD candidate at the University of Lucerne in Switzerland and a research assistant at the University of Stuttgart in Germany. His dissertation project aims to measure and evaluate deliberative quality at the systemic level. His main methodological interests are natural language processing, machine learning, topic modeling, and content analysis. He has published articles in journals such as Acta Politica, Journal of Public Deliberation, Swiss Political Science Review, and Journal of Legislative Studies.
