Abstract
Assessment of learning outcomes and evaluation of teaching methods are necessary to ensure that students are learning the lessons that faculty believe they are conveying. Quantitative data on the effectiveness of various pedagogical methods allows faculty to adjust classes over time, and regular assessment of student learning outcomes provides hard data on the effectiveness of teaching techniques and activities. A pre- and posttest survey was administered to participants in EuroSim 2007, a cross-continent EU simulation run by the Trans-Atlantic Consortium for European Union Studies and Simulations (TACEUSS). This paper analyzes the results of those surveys and examines the ability of evaluation/assessment surveys to capture the effectiveness of simulations in promoting affective learning and in detecting changes in patterns of student interaction as outlined by Greenblat (1973). Preliminary results support the argument that simulations spur and increase affective learning on the part of students. The survey will be an ongoing project, collecting data at all future EuroSims in an effort to build a strong database on the effectiveness and usefulness of large simulations as pedagogical tools.
Notes
The surveys were kept anonymous to ensure that students felt comfortable responding honestly to the questions. Some of the participating schools grade their students on participation in EuroSim, and it was thought that if students believed they were identifiable, this would bias their responses.
Skidmore College and Hamilton College did not participate in either the pre- or postsimulation survey, and the University of Antwerp was the only university to respond to the first survey but leave before completing the postsimulation survey, so the net loss of respondents was quite small.
See Appendix I for a list of universities that participated in EuroSim 2007.
See Appendix I for answer coding for open-ended questions.
The categories for the ordinal variables were collapsed from the original five (strongly agree, agree, neutral, disagree, strongly disagree) to three (agree, neutral, disagree) in order to ensure as many cases per cell as possible in the cross tabs. Additionally, separate cross tabs were run for U.S. and EU students, but too many cells had fewer than five cases for the results to be useful. See Appendix II for cross tabulation results.
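The category collapse described in this note can be sketched as follows. The mapping mirrors the five-to-three recoding stated above; the sample responses are illustrative and are not the study's data.

```python
# Illustrative recoding of a five-point Likert scale into three categories,
# as described in the note above, so that cross-tab cells retain more cases.
from collections import Counter

COLLAPSE = {
    "strongly agree": "agree",
    "agree": "agree",
    "neutral": "neutral",
    "disagree": "disagree",
    "strongly disagree": "disagree",
}

# Made-up example responses, not survey data.
responses = ["strongly agree", "agree", "neutral", "strongly disagree", "agree"]
collapsed = [COLLAPSE[r] for r in responses]
print(Counter(collapsed))  # Counter({'agree': 3, 'neutral': 1, 'disagree': 1})
```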
The chi-square value was significant; however, seven cells had fewer than five cases.
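The caveat in this note reflects the standard rule that the chi-square approximation becomes unreliable when cells have low expected counts. The sketch below, using an invented contingency table rather than the study's data, shows how expected counts (row total times column total divided by the grand total) are checked against the threshold of five.

```python
# Illustrative check of expected cell counts for a chi-square test.
# The table values are made up for demonstration only.
table = [[12, 3, 2],
         [8, 4, 1],
         [2, 1, 1]]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

# Expected count for each cell: (row total * column total) / grand total.
expected = [[r * c / grand for c in col_totals] for r in row_totals]
low_cells = sum(1 for row in expected for e in row if e < 5)
print(f"cells with expected count < 5: {low_cells}")
```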
See Appendix I for answer coding for open-ended questions.
Variation in levels of knowledge among the students has always been an issue; previous EuroSims displayed a similarly wide range of education levels (freshmen to graduate law students) among participants.
Again, cross tabs were run separately for U.S. and European students, but there were too many cells with fewer than five cases. Even the combined cross tabs had a number of cells with fewer than five cases.
This result is tempered by the fact that nine cells had fewer than five cases—all in the “Disagree” and “Neutral” categories.