Discussions and Dialogue in the Classroom

Promoting Productive Political Dialogue in Online Discussion Forums

Pages 724-750 | Received 25 Oct 2019, Accepted 03 Aug 2020, Published online: 28 Aug 2020
 

Abstract

Discussion is a crucial component of learning in the college classroom. Increasingly, university and college faculty use online learning management systems to facilitate and assess course discussions. Given this reality, are there ways to frame prompts that generate normatively better discussions, or discussions in which students are better able to meet course learning outcomes? To answer these questions, we use data from Introduction to American Government classes at two institutions, where the students of three instructors participated in online discussion boards on multiple substantive topics; for each topic, students were randomly assigned to one of two experimental conditions. In each, we framed a related prompt in different ways to test how such prompts affect student success, as measured by several learning outcomes informed by the Association of American Colleges & Universities (AAC&U). With our experimental design and novel data, we test several hypotheses related to student engagement, the content of discussions, and the quality of students' work. In the end, our research has important normative implications for pedagogy as well as for the cultivation of civility and political engagement inside and outside the modern classroom environment.

Acknowledgments

Previous iterations of this paper were presented at the 2017 and 2019 American Political Science Association annual meetings in San Francisco and Washington, DC, respectively. We also thank Justin Brantley for his assistance compiling data.

Notes

1 Student behaviors and submissions on online discussion boards are often exercises in satisficing. By “meaningful, content-rich” discussions we mean evidence of engagement and consideration beyond a superficial “I agree with…” statement or some other banal, minimalist behavior that is clearly instrumental in nature.

2 Online discussions spaced over the course of the semester allow for several relatively low-stakes assignments (compared to exams or term papers) for students to engage with the course material. Collectively, these assignments comprised 15% of the final grade for each course. We hoped these discussions would simulate discussions students may have outside of the classroom, but we acknowledge that the small contribution each discussion forum makes to the final grade could have an impact on the behavior of the students.

3 As discussed below, these experimental prompts include language that primes information relevant to each discussion topic. We expect students will interpret these frames differently from the students who receive the traditional prompts.

4 We chose our three specific outcomes of interest based on our collective goals for our students in introductory courses, which were informed by learning outcomes set by our departments and universities. We believe a similar research approach could inform instructors, regardless of their unique learning outcomes.

5 Researchers have even considered students’ perceptions of learning through the integration of F2F and online discussions (Bliuc et al. 2010).

6 Of course, “best practices” will strongly correlate with student achievement because those students exhibiting behaviors we call best practices are likely the same students who earn the best scores.

7 We would like to thank several anonymous reviewers for helping us think more critically about how we conceptualize the framing of our discussion prompts. In earlier versions of this research, we labeled some prompts as “provocative”; however, the language in our prompts could sometimes be read as leading, politically charged, or even as activating a negative view of the topic at hand. By classifying our experimental frames as “priming prompts,” we acknowledge that the interpretation of our frames may vary by audience, by the substantive topic of the discussion, and by the specific language intended to prime the reader. What our treatment prompts fundamentally do is bring to front-of-mind something specific that is likely to generate affective engagement.

8 Theoretically, each student should participate in four control conditions and four treatment conditions over the course of the semester; furthermore, the network of students participating within each board should be different for each topic and experimental condition.

9 Indeed, this could be the case for any president. However, as Donald Trump was in office during our study period, we use his name as the more politically charged reference.

10 The table presents the prompts and experimental conditions we used for our research; however, it is not our intent to suggest these are the ideal or only prompts for an introductory course in American politics. Indeed, our choices about the language of our priming frames will shape the discussion that follows. As an anonymous reviewer rightly noted, using language such as “minorities” or “low income voters” may provoke implicit bias. Alternatively, the discussion might have taken a different tone had we primed with terms like “voter equity” or “social and economic justice.” Future research and practical application should analyze the impacts of specific choices in priming language, perhaps through the use of additional experimental conditions.

11 While questions of external validity are fair in any experimental setting, we took several steps to improve the quality of our inferences. At the start of the semester, instructors informed students that discussion board posts would be utilized as part of an ongoing research project. Students reviewed and signed an IRB-approved consent form (Institution X Study ID: 17-0162; Institution Y: Human Subjects Review (HSR) approval 01/13/2017); however, they were not required to participate in the study (though they would still need to complete the course discussion board assignments). To assist with external validity, after this initial discussion the research study was not mentioned again, nor was the fact that students were assigned different prompts for each topic. As such, we hoped students would not think of the discussion as anything out of the ordinary for a college course.

12 At the conclusion of each discussion, we copied information from the LMS into a database, including the text of the post itself, the author, topic, timestamp, whether the post was an initial response to the prompt or a reply to one of their peers, and information about the specific course (e.g., semester, instructor, university, etc.).

13 Institution X utilizes Blackboard (http://www.blackboard.com/) while Institution Y uses Desire2Learn (https://www.d2l.com/).

14 To help account for the potential differences in the educational context surrounding each assignment, in the analysis to follow, we often disaggregate results by instructor or include appropriate controls in our regressions. In the end, we believe the value added by our broader sample outweighs the potential for differences in instruction across faculty members.

15 The minimum number of posts required to complete each assignment was 3 (1 initial post and 2 replies to classmates). Many students posted more than this minimum threshold for completion; however, a number of students did not submit two replies to their classmates.

16 For reference, an average page of standard text (double spaced, 1” margins, 12pt Times New Roman) contains approximately 250 words. Instructors did not give students a specific length requirement for initial posts or replies.

17 Before building word clouds, we take a number of steps to clean the data. First, we use the tm text-mining package (Feinerer et al. 2018) to remove punctuation, capitalization, stopwords (e.g., “a,” “and,” “the”), and extra white space. In addition, we use the SnowballC package (Bouchet-Valat 2019) to perform word stemming, which collapses words to their root for easier text analysis (e.g., “politics,” “political,” and “politician” each become “polit”).
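The cleaning pipeline in this note was implemented in R with tm and SnowballC; purely as an illustrative sketch, the same sequence of steps (lowercasing, punctuation removal, stopword filtering, stemming, whitespace collapsing) can be approximated in standard-library Python. The stopword list and the toy suffix stripper below are our own simplifications, not the tm stopword list or the real Snowball algorithm:

```python
import re
import string

# Tiny illustrative stopword list; tm's English stopword list is far longer.
STOPWORDS = {"a", "an", "and", "the", "of", "to", "in", "is", "it"}

def toy_stem(word):
    """Toy suffix stripper standing in for Snowball stemming (hypothetical rules)."""
    for suffix in ("icians", "ician", "ical", "ics", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 4:
            return word[: len(word) - len(suffix)]
    return word

def preprocess(text):
    text = text.lower()                                                # drop capitalization
    text = text.translate(str.maketrans("", "", string.punctuation))   # drop punctuation
    tokens = re.split(r"\s+", text.strip())                            # collapse extra whitespace
    return [toy_stem(t) for t in tokens if t and t not in STOPWORDS]

print(preprocess("The Politics of political politicians."))
# → ['polit', 'polit', 'polit']
```

As in the note's example, the three "polit-" variants collapse to a single root, which is what makes the word-cloud counts comparable across groups.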

18 Here, we asked students to consider what the founding fathers might think of the state of the country today. In the treatment condition, we primed the “current partisan political climate and the Trump presidency.”

19 Students mentioned “Trump” just 13 times in the control group but 317 times in the treatment condition. The stem “parti-” (e.g., partisan, parties) was used 76 times in the control group and 172 times in the treatment group. Students brought up immigration nearly 4 times as often in the treatment group (98) compared to the control group (25).

20 One of the most common words used in both groups is “constitut-,” which was used 458 times in the control group and 351 times in the treatment group.

21 See Figure A1 in the Appendix for comparison word clouds for the other topics.

22 While there are several R packages for these methods, here we utilized tidytext (Queiroz et al. 2019).

23 For additional information on this lexicon, please visit: http://saifmohammad.com/WebPages/NRC-Emotion-Lexicon.htm.

24 This lexicon categorizes some words within multiple emotions/sentiments. For example, “abandon” evokes both fear and sadness but also reveals a negative sentiment.

25 For example, in the Campaigns & Elections board, there are 6,678 negative words and 6,905 positive words. As percentages of the total sentiment-bearing words, this results in a Net Sentiment value of 1.6% (50.8% positive minus 49.2% negative), a sentiment that is just slightly more positive than negative.
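The Net Sentiment arithmetic in this note can be reproduced directly. The sketch below (plain Python; the function name is ours) rounds each share to one decimal place before subtracting, matching the percentages quoted above:

```python
def net_sentiment(n_negative, n_positive):
    """Net Sentiment: share of positive words minus share of negative words,
    among all sentiment-bearing words, in percentage points."""
    total = n_negative + n_positive
    pos_pct = round(100 * n_positive / total, 1)
    neg_pct = round(100 * n_negative / total, 1)
    return pos_pct, neg_pct, round(pos_pct - neg_pct, 1)

# Campaigns & Elections board figures from the note:
print(net_sentiment(6678, 6905))  # → (50.8, 49.2, 1.6)
```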

26 As an anonymous reviewer pointed out, this finding may be a function of students being in an introductory course. In future research, it would be interesting to compare Net Sentiment using a similar experimental design in an upper-level seminar, where students have a greater grasp of the discipline.

27 VALUE rubrics are available from the AAC&U website (https://www.aacu.org/value/rubrics).

28 Results by instructor and by topic available upon request.

29 These variables also control for temporal grading variation for each instructor. We acknowledge, for example, that instructors may consciously or subconsciously grade differently at the start of the semester.

30 Full results available upon request.

31 Full results available upon request.

32 We conducted a preliminary readability analysis by topic, treatment condition, initial post vs. reply posts, and instructor. Using the R package quanteda (Quantitative Analysis of Textual Data) (Benoit et al. 2018), we present the average number of words and sentences in each post in addition to five different readability measures, each of which returns a value corresponding to the approximate grade level of the text.
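The note's readability measures were computed with quanteda in R. To show what a grade-level measure involves, here is a stdlib-only Python sketch of one standard formula, the Flesch-Kincaid grade level (0.39 x words per sentence + 11.8 x syllables per word - 15.59). The vowel-group syllable counter is a crude heuristic of our own, not quanteda's implementation, so treat the output as approximate:

```python
import re

def count_syllables(word):
    """Heuristic: count vowel groups, with a crude silent-e adjustment.
    Real readability tools use dictionaries or more careful rules."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    """Flesch-Kincaid grade level of a text (approximate U.S. grade)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

print(round(fk_grade("The cat sat on the mat."), 2))
```

Very short, monosyllabic sentences score below grade zero, which is why readability indices are usually reported on longer passages such as full discussion posts.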

Additional information

Notes on contributors

Aaron S. King

Aaron S. King is an associate professor and the coordinator of the undergraduate program in political science within the Department of Public & International Affairs at UNC Wilmington. His research and teaching focus on several topics in American politics, including political institutions, ambition, representation, and competition.

J. Benjamin Taylor

J. Benjamin Taylor is an assistant professor of political science in the School of Government & International Affairs at Kennesaw State University. His research and teaching focus on American politics, with a specialty in political behavior and political communication.

Brian M. Webb

Brian M. Webb is an associate professor of political science at Gordon State College. His research and teaching focus broadly on American politics, with particular attention to political institutions, congressional procedure, and Georgia politics.
